Found 3 results

510(k) Data Aggregation

    K Number: K222746
    Date Cleared: 2023-03-27 (196 days)
    Regulation Number: 892.2070
    Intended Use

    Overjet Caries Assist (OCA) is a radiological, automated, concurrent-read, computer-assisted detection (CADe) software intended to aid in the detection and segmentation of caries on bitewing and periapical radiographs. The device provides additional information for the dentist to use in their diagnosis of a tooth surface suspected of being carious. The device is not intended as a replacement for a complete dentist's review or their clinical judgment that takes into account other relevant information from the image, patient history, or actual in vivo clinical assessment.

    Device Description

    Overjet Caries Assist (OCA) is a radiological, automated, concurrent-read, computer-assisted detection (CADe) software intended to aid in the detection and segmentation of caries on bitewing and periapical radiographs. The device provides additional information for the dentist to use in their diagnosis of a tooth surface suspected of being carious. The device is not intended as a replacement for a complete dentist's review or their clinical judgment that takes into account other relevant information from the image, patient history, or actual in vivo clinical assessment.

    OCA is a software-only device that operates in three layers: a Network Layer, a Presentation Layer, and a Decision Layer. Images are pulled in from a clinic or dental office, the Machine Learning model creates predictions in the Decision Layer, and the results are pushed to the dashboard in the Presentation Layer.

    The machine learning system within the Decision Layer processes bitewing and periapical radiographs and annotates suspected carious lesions. It comprises four modules:

    • Image Preprocessor Module
    • Tooth Number Assignment Module
    • Caries Module
    • Post Processing
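The three-layer, four-module flow described above can be sketched as follows. All function names and data shapes here are illustrative placeholders chosen for this sketch, not Overjet's actual code.

```python
# Illustrative sketch of the data flow described above; names and
# shapes are hypothetical, not Overjet's implementation.

def preprocess(radiograph):
    """Image Preprocessor Module: normalize the incoming image."""
    return radiograph

def assign_tooth_numbers(image):
    """Tooth Number Assignment Module: per-tooth segmentation masks."""
    return {"tooth_masks": image}

def detect_caries(image):
    """Caries Module: pixel-wise mask of suspected carious lesions."""
    return {"lesion_masks": image}

def post_process(teeth, lesions):
    """Post Processing: combine masks into the annotated result."""
    return {"teeth": teeth, "lesions": lesions}

def decision_layer(radiograph):
    """Images arrive via the Network Layer, predictions are made here
    in the Decision Layer, and the result is pushed to the dashboard
    in the Presentation Layer."""
    image = preprocess(radiograph)
    return post_process(assign_tooth_numbers(image), detect_caries(image))

print(decision_layer("bitewing.png"))
```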
    AI/ML Overview

    This document describes the Overjet Caries Assist (OCA) device, a computer-assisted detection (CADe) software intended to aid dentists in the detection and segmentation of caries on bitewing and periapical radiographs.

    Here's an analysis of the acceptance criteria and the study that proves the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document does not explicitly state numerical acceptance criteria for sensitivity or specificity as predefined goals. However, it does state one performance objective for the clinical reader improvement study: an increase in dentists' sensitivity of greater than 15%. The other metrics are presented as reported performance from the standalone and clinical evaluation studies.

    | Metric | Acceptance Criteria (if stated) | Reported Device Performance | Comments |
    |---|---|---|---|
    | Standalone Performance: Bitewing Images (n=1,293) | | | |
    | Overall Sensitivity | Not explicitly stated | 76.6% (73.8%, 79.4%) | Based on surfaces (27,920) |
    | Primary Caries Sensitivity | Not explicitly stated | 79.9% (77.1%, 82.7%) | |
    | Secondary Caries Sensitivity | Not explicitly stated | 60.9% (53.5%, 68.2%) | |
    | Enamel Caries Sensitivity | Not explicitly stated | 74.4% (70.4%, 78.3%) | |
    | Dentin Caries Sensitivity | Not explicitly stated | 79.5% (75.8%, 83.2%) | |
    | Overall Specificity | Not explicitly stated | 99.1% (98.9%, 99.2%) | |
    | Primary Caries Dice Score | Not explicitly stated | 0.77 (0.76, 0.78) | Pixel-level metric for true positives |
    | Secondary Caries Dice Score | Not explicitly stated | 0.73 (0.70, 0.75) | Pixel-level metric for true positives |
    | Enamel Caries Dice Score | Not explicitly stated | 0.76 (0.75, 0.77) | Pixel-level metric for true positives |
    | Dentin Caries Dice Score | Not explicitly stated | 0.77 (0.76, 0.79) | Pixel-level metric for true positives |
    | Standalone Performance: Periapical Images (n=1,314) | | | |
    | Overall Sensitivity | Not explicitly stated | 79.4% (76.1%, 82.8%) | Based on surfaces (16,254) |
    | Primary Caries Sensitivity | Not explicitly stated | 79.8% (76.0%, 83.7%) | |
    | Secondary Caries Sensitivity | Not explicitly stated | 77.9% (71.4%, 84.5%) | |
    | Enamel Caries Sensitivity | Not explicitly stated | 67.9% (60.7%, 75.1%) | |
    | Dentin Caries Sensitivity | Not explicitly stated | 84.9% (81.3%, 88.4%) | |
    | Overall Specificity | Not explicitly stated | 99.4% (99.2%, 99.5%) | |
    | Primary Caries Dice Score | Not explicitly stated | 0.79 (0.78, 0.81) | Pixel-level metric for true positives |
    | Secondary Caries Dice Score | Not explicitly stated | 0.79 (0.77, 0.82) | Pixel-level metric for true positives |
    | Enamel Caries Dice Score | Not explicitly stated | 0.75 (0.73, 0.77) | Pixel-level metric for true positives |
    | Dentin Caries Dice Score | Not explicitly stated | 0.81 (0.80, 0.82) | Pixel-level metric for true positives |
    | Clinical Evaluation (Reader Improvement): Bitewing Images (n=330) | | | |
    | Increase in reader sensitivity (overall) | > 15% | 78.5% (assisted) vs. 64.6% (unassisted) | As an absolute difference, 78.5 - 64.6 = 13.9 percentage points, slightly below the stated > 15% criterion; as a relative increase, ((78.5/64.6) - 1) x 100 = 21.5%, which meets it. The document's framing implies the relative reading and concludes the study demonstrates a "clear benefit". |
    | Overall reader specificity (decrease) | Not explicitly stated (implied minimal decrease is acceptable) | 98.6% (assisted) vs. 99.0% (unassisted) | Decrease of 0.4 percentage points |
    | Overall wAFROC AUC (increase) | Not explicitly stated | 0.785 (assisted) vs. 0.729 (unassisted) | Increase of 0.055, statistically significant (p < 0.001) |
    | Reader Dice Score (overall) | Not explicitly stated | 0.72 (assisted) vs. 0.66 (unassisted) | Average across caries types. Mean increased from 0.67 to 0.76 (primary), 0.65 to 0.67 (secondary), 0.65 to 0.74 (enamel), 0.67 to 0.74 (dentin). |
    | Clinical Evaluation (Reader Improvement): Periapical Images (n=330) | | | |
    | Increase in reader sensitivity (overall) | > 15% | 79.0% (assisted) vs. 65.6% (unassisted) | As an absolute difference, 79.0 - 65.6 = 13.4 percentage points, below the > 15% criterion; as a relative increase, ((79.0/65.6) - 1) x 100 = 20.4%, which meets it under that reading. |
    | Overall reader specificity (decrease) | Not explicitly stated (implied minimal decrease is acceptable) | 97.6% (assisted) vs. 98.0% (unassisted) | Decrease of 0.4 percentage points |
    | Overall wAFROC AUC (increase) | Not explicitly stated | 0.848 (assisted) vs. 0.799 (unassisted) | Increase of 0.050, statistically significant (p < 0.001) |
    | Reader Dice Score (overall) | Not explicitly stated | 0.77 (assisted) vs. 0.71 (unassisted) | Average across caries types. Mean increased from 0.73 to 0.80 (primary), 0.69 to 0.74 (secondary), 0.64 to 0.73 (enamel), 0.76 to 0.81 (dentin). |
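The ambiguity in how the > 15% sensitivity criterion is read (absolute percentage points vs. relative increase) can be made concrete with a few lines of arithmetic, using the bitewing reader-study figures quoted above:

```python
# Two readings of the ">15% increase in sensitivity" criterion,
# using the bitewing reader-study figures (unassisted vs. assisted).
unassisted, assisted = 64.6, 78.5

absolute_pp = assisted - unassisted               # percentage-point difference
relative_pct = (assisted / unassisted - 1) * 100  # relative increase

print(round(absolute_pp, 1))   # 13.9 -> below 15 read as an absolute difference
print(round(relative_pct, 1))  # 21.5 -> above 15 read as a relative increase
```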

    2. Sample Size for Test Set and Data Provenance:

    • Standalone Testing:

      • Sample Size: 1,293 Bitewing images (27,920 surfaces) and 1,314 Periapical images (16,254 surfaces).
      • Data Provenance: Not explicitly stated regarding country of origin. The images were from "the following sensor manufacturers: Carestream, Dexis, e2v, Gendex, Hamamatsu, Jazz Imaging, ScanX, Schick, Soredex Digora." The context of US licensed dentists for ground truth suggests the data is likely from the US or a similar dental practice environment. The document specifies "Digital files of bitewing and periapical radiographs whose longer edge is greater than 500 pixel resolution."
      • Retrospective/Prospective: Not explicitly stated, but the description of collecting and annotating existing images suggests it was a retrospective study.
    • Clinical Evaluation (Reader Improvement Study):

      • Sample Size: 330 bitewing images (94 containing caries / 236 without caries) and 330 periapical images (94 containing caries / 236 without caries).
      • Data Provenance: Not explicitly stated regarding country of origin, but the "US licensed dentists" implies the study was conducted within the US.
      • Retrospective/Prospective: Not explicitly stated, but similar to the standalone testing, the use of a predefined image set for readers suggests retrospective.

    3. Number of Experts and Qualifications for Ground Truth:

    • Standalone Testing: Ground truth established by consensus of three US licensed dentists. Non-consensus labels were adjudicated by an oral radiologist. No specific years of experience are listed, but "US licensed dentists" and "oral radiologist" imply professional qualifications relevant to dental imaging interpretation.
    • Clinical Evaluation (Reader Improvement Study): Ground truth established by consensus of three US licensed dentists. Non-consensus labels were adjudicated by an oral radiologist. (Same as standalone testing).

    4. Adjudication Method for the Test Set:

    • Standalone Testing & Clinical Evaluation: The method described is consensus of three experts, with a fourth expert (oral radiologist) adjudicating non-consensus cases. This can be described as a "3+1 adjudication" method, where the "1" is specifically for tie-breaking or resolving disagreements among the initial three.
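A minimal sketch of this "3+1" logic, assuming "consensus" means unanimous agreement among the three readers (the document does not define the term precisely); the adjudicator callback stands in for the oral radiologist:

```python
def ground_truth_label(reader_labels, adjudicate):
    """Return the three readers' label when they agree; otherwise
    defer to the adjudicator (the '+1' in 3+1 adjudication).
    Consensus is modeled here as unanimity, which is an assumption."""
    if len(set(reader_labels)) == 1:
        return reader_labels[0]
    return adjudicate(reader_labels)

# Agreement passes through unchanged; disagreement goes to the radiologist.
print(ground_truth_label(["caries", "caries", "caries"],
                         adjudicate=lambda ls: "unused"))       # caries
print(ground_truth_label(["caries", "sound", "caries"],
                         adjudicate=lambda ls: "caries"))       # caries
```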

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • Yes, a MRMC study was done. This is detailed under "Clinical Evaluation - Reader Improvement."
    • Effect Size / Reader Improvement:
      • Bitewing Images:
        • Overall reader sensitivity improved from 64.6% (unassisted) to 78.5% (assisted). This is an absolute increase of 13.9 percentage points.
        • Weighted AFROC AUC increased from 0.729 (unassisted) to 0.785 (assisted), an increase of 0.055. This increase was statistically significant (p < 0.001).
        • Mean Dice scores increased from 0.67 (unassisted) to 0.76 (assisted) for primary caries. Similar increases were observed for other caries types.
      • Periapical Images:
        • Overall reader sensitivity improved from 65.6% (unassisted) to 79.0% (assisted). This is an absolute increase of 13.4 percentage points.
        • Weighted AFROC AUC increased from 0.799 (unassisted) to 0.848 (assisted), an increase of 0.050. This increase was statistically significant (p < 0.001).
        • Mean Dice scores increased from 0.73 (unassisted) to 0.80 (assisted) for primary caries. Similar increases were observed for other caries types.

    The document concludes that the increase in wAFROC numbers clearly demonstrates improvement in caries detection by dentists when aided by Overjet Caries Assist, aligned with increases in sensitivity and minimal decrease in specificity.

    6. Standalone (Algorithm Only) Performance Study:

    • Yes, a standalone performance study was done. This is detailed under "Standalone Testing." The results for the algorithm's sensitivity, specificity, and Dice scores are provided for both bitewing and periapical images.

    7. Type of Ground Truth Used:

    • Expert Consensus. For both standalone and clinical evaluation studies, the ground truth was "established by consensus of labels of three US licensed dentists, and non-consensus labels were adjudicated by an oral radiologist."

    8. Sample Size for the Training Set:

    • The document does not explicitly state the sample size for the training set. It describes the "Machine Learning model" and its components but does not provide details about the data used for training.

    9. How the Ground Truth for the Training Set Was Established:

    • The document does not explicitly state how the ground truth for the training set was established. It only details the ground truth establishment for the test sets used in standalone and clinical performance evaluations.

    K Number: K212519
    Date Cleared: 2022-05-10 (273 days)
    Regulation Number: 892.2070
    Intended Use

    The Overjet Caries Assist (OCA) is a radiological, automated, concurrent read, computer-assisted detection software intended to aid in the detection and segmentation of caries on bitewing radiographs. The device provides additional information for the dentist to use in their diagnosis of a tooth surface suspected of being carious. The device is not intended as a replacement for a complete dentist's review or their clinical judgment that takes into account other relevant information from the image, patient history, and actual in vivo clinical assessment.

    Device Description

    Overjet Caries Assist (OCA) is a radiological automated concurrent read computer-assisted detection (CAD) software intended to aid in the detection and segmentation of caries on bitewing radiographs. The device provides additional information for the clinician to use in their diagnosis of a tooth surface suspected of being carious. The device is not intended as a replacement for a complete clinician's review or their clinical judgment that takes into account other relevant information from the image or patient history.

    OCA is a software-only device that operates in three layers: a Network Layer, a Presentation Layer, and a Decision Layer. Images are pulled in from a clinic or dental office, the Machine Learning model creates predictions in the Decision Layer, and the results are pushed to the dashboard in the Presentation Layer.

    The Machine Learning System within the Decision Layer processes bitewing radiographs and annotates suspected carious lesions. It comprises four modules:

    • Image Classifier Module - evaluates the incoming radiograph and predicts the image type (Bitewing vs. Periapical). This classification supports the data flow of the incoming radiograph. Any non-radiographs, such as patient charting information or other non-bitewing/non-periapical images, are classified as "junk" and not processed. OCA shares the classifier and Tooth Number modules with the Overjet Dental Assist product cleared under K210187.
    • Tooth Number Assignment Module - analyzes the processed image, determines which tooth numbers are present, and provides a pixel-wise segmentation mask for each tooth number.
    • Caries Module - outputs a pixel-wise segmentation mask of all carious lesions using an ensemble of 3 U-Net based models. The shape and location of every carious lesion are contained in this mask as the carious-lesion predictions.
    • Post Processing Module - the overlap of tooth masks from the Tooth Number Assignment Module and carious lesions from the Caries Module is used to assign each carious lesion to a specific tooth. The Image Post Processor then annotates the original radiograph with the carious-lesion predictions.
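The overlap rule in the post-processing step can be sketched as follows, modeling segmentation masks as sets of pixel coordinates. This is an illustrative reconstruction of the described behavior, not Overjet's code.

```python
def assign_lesions_to_teeth(tooth_masks, lesion_masks):
    """Assign each carious lesion to the tooth whose segmentation
    mask it overlaps most, per the overlap rule described above.
    Masks are modeled as sets of (row, col) pixel coordinates."""
    assignments = {}
    for lesion_id, lesion in lesion_masks.items():
        best_tooth, best_overlap = None, 0
        for tooth, mask in tooth_masks.items():
            overlap = len(lesion & mask)
            if overlap > best_overlap:
                best_tooth, best_overlap = tooth, overlap
        assignments[lesion_id] = best_tooth
    return assignments

# Lesion L1 overlaps tooth 3 at pixel (0, 1), so it is assigned to tooth 3.
teeth = {3: {(0, 0), (0, 1)}, 4: {(5, 5), (5, 6)}}
lesions = {"L1": {(0, 1), (0, 2)}}
print(assign_lesions_to_teeth(teeth, lesions))  # {'L1': 3}
```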
    AI/ML Overview

    Acceptance Criteria and Device Performance for Overjet Caries Assist

    The Overjet Caries Assist (OCA) is a radiological, automated, concurrent read, computer-assisted detection software intended to aid in the detection and segmentation of caries on bitewing radiographs. The device's performance was evaluated through standalone testing of the AI algorithm and a clinical reader improvement study.

    1. Table of Acceptance Criteria and Reported Device Performance

    | Measure | Acceptance Criteria (Predicate Device Performance) | Reported Device Performance (Overjet Caries Assist) |
    |---|---|---|
    | Reader Improvement Study | | |
    | Increase in dentist's sensitivity with AI assistance | Approximately 20% increase in sensitivity for the predicate device; for OCA, a greater than 15% increase in dentist's sensitivity was established as the acceptance criterion. | Overall reader sensitivity improved from 57.9% to 76.2% (an increase of 18.3 percentage points, satisfying the > 15% criterion). Primary caries: 60.5% to 79.4% (18.9 pp improvement). Secondary caries: 49.8% to 63.0% (13.2 pp improvement). |
    | Specificity with AI assistance | Not explicitly defined as an improvement criterion for the predicate, but overall specificity is a key measure. | Overall reader specificity decreased slightly from 99.3% to 98.4% (a decrease of less than 1 percentage point), deemed acceptable by the applicant because the benefit in sensitivity outweighs this slight decrease. |
    | AFROC Score (assisted) | The predicate did not explicitly state an AFROC criterion, but improving diagnostic accuracy is implicit. | AUC increased from 0.593 (unassisted) to 0.649 (assisted), an increase of 0.057 (statistically significant, p < 0.001). |
    | Standalone Performance (AI Algorithm Only) | | |
    | Standalone sensitivity | Not directly comparable to the predicate's standalone AI performance, as the predicate's description focuses on human improvement. | Overall: 72.0% (95% CI: 62.9%, 81.1%). Primary caries: 74.4% (95% CI: 64.4%, 84.4%). Secondary caries: 62.5% (95% CI: 46.6%, 78.4%). |
    | Standalone specificity | Not directly comparable to the predicate's standalone AI performance. | Overall: 98.1% (95% CI: 97.7%, 98.5%) |
    | Lesion segmentation (Dice score) | Not explicitly provided for the predicate device. | Mean Dice score for true positives: primary caries 0.69 (0.66, 0.72); secondary caries 0.75 (0.71, 0.79). |

    2. Sample Size and Data Provenance for the Test Set

    • Sample Size for Test Set: 352 bitewing radiographs (104 containing caries / 248 without caries).
    • Data Provenance: Not explicitly stated in the provided text (e.g., country of origin). However, given the context of U.S. FDA clearance and the use of US-licensed dentists, it is likely that the data is either from the US or representative of populations seen in the US. The type of data is retrospective, as existing radiographs were used.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Three US-licensed dentists for initial consensus, and one Dental Radiologist for adjudication of non-consensus labels.
    • Qualifications of Experts: All experts were US-licensed dentists. The adjudicating expert was specifically a Dental Radiologist. No further details on years of experience were provided.

    4. Adjudication Method for the Test Set

    The adjudication method used was a "3+1" approach. Ground truth was initially established by the consensus labels of three US-licensed dentists. Any non-consensus labels were then adjudicated by a Dental Radiologist.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    Yes, an MRMC comparative effectiveness study was conducted.

    • Effect Size of Human Readers Improvement with AI vs. without AI assistance:
      • Sensitivity: Overall reader sensitivity improved by 18.3 percentage points (from 57.9% unassisted to 76.2% assisted).
        • For primary caries, sensitivity improved by 18.9 percentage points (60.5% unassisted to 79.4% assisted).
        • For secondary caries, sensitivity improved by 13.2 percentage points (49.8% unassisted to 63.0% assisted).
      • Specificity: Overall reader specificity decreased slightly by 0.9 percentage points (from 99.3% unassisted to 98.4% assisted).
      • AFROC AUC: The average AUC for all readers increased by 0.057 (from 0.593 unassisted to 0.649 assisted). This increase was statistically significant (p < 0.001).
      • Average Dice Scores for Segmentation:
        • Primary caries: Mean Dice scores improved from 0.67 unassisted to 0.69 assisted.
        • Secondary caries: Mean Dice scores improved from 0.65 unassisted to 0.74 assisted. (Note: These segmentation improvements were not statistically significant).

    6. Standalone (Algorithm Only) Performance

    Yes, standalone performance (algorithm only without human-in-the-loop) was conducted.

    • Overall standalone sensitivity: 72.0% (95% CI: 62.9%, 81.1%)
    • Overall standalone specificity: 98.1% (95% CI: 97.7%, 98.5%)
    • Lesion Segmentation (Dice Score):
      • Primary caries: Mean Dice score of 0.69
      • Secondary caries: Mean Dice score of 0.75
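The Dice scores reported above are the standard overlap metric for segmentation. On binary masks, modeled here as sets of pixels, it is computed as:

```python
def dice(pred, truth):
    """Dice = 2|P intersect T| / (|P| + |T|); 1.0 means perfect overlap."""
    if not pred and not truth:
        return 1.0  # convention: two empty masks agree perfectly
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Two 3-pixel masks sharing 2 pixels: Dice = 2*2 / (3+3)
pred = {(0, 0), (0, 1), (1, 1)}
truth = {(0, 1), (1, 1), (1, 2)}
print(round(dice(pred, truth), 2))  # 0.67
```

Note the reported Dice figures are averaged over true-positive lesions only, per the "pixel-level metric for true positives" comment in the tables.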

    7. Type of Ground Truth Used

    The ground truth used was expert consensus complemented by expert adjudication. Specifically, a consensus of three US-licensed dentists, with non-consensus cases adjudicated by a Dental Radiologist.

    8. Sample Size for the Training Set

    The sample size for the training set is not provided in the excerpt. The document only mentions "training data" in the context of the algorithm's capability to learn during its operation, but not a specific size for its initial training.

    9. How the Ground Truth for the Training Set was Established

    The method for establishing ground truth for the training set is not explicitly detailed in the provided text. It generally states that the algorithm "has been trained," but does not provide information on how the ground truth for that training was established.


    K Number: K210187
    Date Cleared: 2021-05-19 (114 days)
    Regulation Number: 892.2050
    Intended Use

    Overjet Dental Assist is a radiological semi-automated image processing software device intended to aid dental professionals in the measurements of mesial and distal bone levels associated with each tooth from bitewing and periapical radiographs.

    It should not be used in-lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis. The system is to be used by trained professionals including, but not limited to, dentists and dental hygienists.

    Device Description

    Overjet Dental Assist, developed by Overjet Inc., is a radiological, semi-automated image processing software device intended to aid dental professionals in the measurements of mesial and distal bone levels associated with each tooth from bitewing and periapical radiographs.

    Overjet Dental Assist is a cloud-native Software as a Medical Device that allows users to automate the measurement of interproximal bone levels for bitewing and periapical radiographs, review the associated radiographs, and view and modify annotations.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Metric | Acceptance Criteria | Reported Performance (Bitewing) | Reported Performance (Periapical) |
    |---|---|---|---|
    | Sensitivity | > 85% | 98.7% | 88.94% |
    | Specificity | > 85% | 95.0% | 95.96% |
    | Mean absolute difference (bone level measurement) | < 1.5 mm | 0.307 mm | 0.353 mm |
    | Mean absolute difference (periapical root length measurement) | < 1.5 mm (implied, as it is a measurement) | Not applicable | 0.567 mm |

    Note regarding "Periapical Root Length": while the acceptance criteria for sensitivity, specificity, and mean absolute difference are explicitly stated for bone level measurements, the document also reports performance for periapical root length with similar metrics and a mean absolute difference. It is reasonable to infer a similar acceptance criterion for this measurement type.
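The mean absolute difference criterion above is a simple per-measurement average. A minimal sketch, with made-up millimeter values for illustration only:

```python
def mean_absolute_difference(device_mm, truth_mm):
    """Average |device - ground truth| over paired measurements (mm)."""
    return sum(abs(d - t) for d, t in zip(device_mm, truth_mm)) / len(device_mm)

# Hypothetical paired bone-level measurements; values are illustrative.
device = [2.1, 3.4, 1.8, 2.9]
truth = [2.0, 3.0, 2.1, 2.8]
mad = mean_absolute_difference(device, truth)
print(round(mad, 3), mad < 1.5)  # 0.225 True (passes the < 1.5 mm criterion)
```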

    2. Sample Sizes and Data Provenance

    • Test Set (Clinical Testing):

      • Sample Size: 65 bitewing and 96 periapical radiographs from 63 subjects.
      • Data Provenance: Retrospective clinical subject data from patients in the United States, 22 years old or older, without primary teeth. No information about ethnicity was available.
    • Test Set (Bench Testing):

      • Sample Size: 2234 bitewing radiographs and 6543 periapical radiographs.
      • Data Provenance: Not explicitly stated, beyond being "ground truth data set utilizing Object Keypoint Similarity assessment." The context suggests this is also retrospective imaging data.
    • Training Set (not explicitly called out as such here, but implied as distinct from the validation sets):

      • Sample Size: Not explicitly stated in the provided text.
      • Data Provenance: Not explicitly stated.
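Object Keypoint Similarity (OKS), named for the bench testing, is commonly defined as in the COCO keypoint evaluation; whether Overjet used exactly this formulation is not stated, so the sketch below is an assumption.

```python
import math

def oks(pixel_dists, area, k_consts):
    """COCO-style OKS: mean over keypoints of exp(-d_i^2 / (2 s^2 k_i^2)),
    where d_i is the predicted-vs-labeled keypoint distance, s^2 the
    object area, and k_i a per-keypoint falloff constant."""
    terms = [math.exp(-(d * d) / (2 * area * k * k))
             for d, k in zip(pixel_dists, k_consts)]
    return sum(terms) / len(terms)

# Perfect keypoint predictions score exactly 1.0; error decays the score.
print(oks([0.0, 0.0], area=400.0, k_consts=[0.05, 0.05]))  # 1.0
```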

    3. Number of Experts to Establish Ground Truth for Test Set & Qualifications

    • Number of Experts: 3 US licensed dentists for initial labeling, plus 2 US Dental Radiologists for adjudication.
    • Qualifications of Experts: US licensed dentists; US Dental Radiologists. No specific years of experience are mentioned.

    4. Adjudication Method for the Test Set

    • Clinical Testing: The adjudication method was "initial measurements by three US licensed dentists, which were then adjudicated by two US Dental Radiologists." This can be interpreted as a form of expert consensus and review. It's not a standard 2+1 or 3+1, but a multi-expert review process.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No explicit MRMC comparative effectiveness study was done to quantify the improvement of human readers with AI assistance versus without. The study focuses on standalone performance against expert-derived ground truth. The document does state, "The overall sensitivity of 88% was consistent with the performance of the three ground truth dentists," which indirectly compares the device's sensitivity to human performance on a specific metric, but this is not a formal MRMC study of human reading with and without AI.

    6. Standalone Performance Study

    • Yes, a standalone study was done. The entire "Clinical Testing" section describes the device's performance (sensitivity, specificity, mean absolute difference) when compared against an expert-established ground truth. The device results are reported directly without human intervention in the reported metrics.

    7. Type of Ground Truth Used

    • Expert Consensus. For the clinical testing, the ground truth was established by three US licensed dentists using a measurement tool, whose measurements were then adjudicated by two US Dental Radiologists. For the bench testing, it mentions "labeled keypoints," implying expert labeling.

    8. Sample Size for the Training Set

    • Not explicitly stated in the provided text. The document refers to "the Overjet met acceptable performance criteria with the following results:" and then lists results for "Bench Testing" and "Clinical Testing," which are typically considered validation/test sets, not the training set itself.

    9. How Ground Truth for Training Set Was Established

    • Not explicitly stated in the provided text, as the size and provenance of the training set itself are not detailed. It can be inferred that similar expert labeling processes would have been used to establish ground truth for training data, but this is not confirmed.
