
510(k) Data Aggregation

    K Number: K213795
    Manufacturer:
    Date Cleared: 2022-04-21 (136 days)
    Product Code:
    Regulation Number: 892.2070
    Reference & Predicate Devices: N/A
    Predicate For:
    Intended Use

    Videa Caries Assist is a computer-assisted detection (CADe) device that analyzes intraoral radiographs to identify and localize carious lesions. Videa Caries Assist is indicated for use by board licensed dentists for the concurrent review of bitewing (BW) radiographs acquired from adult patients aged 22 years or older.

    Device Description

    Videa Caries Assist (VCA) is cloud-based, AI-powered medical device software for the automatic detection of carious lesions in dental radiographs. The device is available as a service via an API (Application Programming Interface) behind a firewalled network. Given proper authentication and a bitewing image, the device returns a set of bounding boxes representing the carious lesions detected. VCA is accessed by the dental practitioner through their Dental Viewer, from which the user can upload a radiograph to VCA and then review the results. The device outputs a binary indication of the presence or absence of findings. If findings are present, the device outputs the coordinates of the bounding boxes for each finding; if no findings are present, the device outputs a clear indication that no carious lesions were identified.
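    The 510(k) summary does not publish the VCA API contract, so the endpoint, field names, and response schema in the following sketch are all hypothetical; it only illustrates the request/response flow described above (authenticate, upload a bitewing image, receive bounding boxes or an explicit no-findings indication).

```python
import requests

# Hypothetical endpoint and credential: the real VCA API is not
# documented in the 510(k) summary, so these names are assumptions.
API_URL = "https://vca.example.com/v1/analyze"
API_TOKEN = "your-issued-token"

def detect_caries(image_path: str) -> list[dict]:
    """Upload one bitewing radiograph; return detected lesion boxes."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    result = resp.json()
    # Per the device description: a binary findings indication, plus
    # bounding-box coordinates for each finding when present.
    if not result["findings_present"]:
        return []  # explicit "no carious lesions identified"
    return result["bounding_boxes"]
```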

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study details for the Videa Caries Assist device, based on the provided document:

    Acceptance Criteria and Device Performance

    | Metric | Acceptance Criteria (Implicit) | Reported Performance (Standalone Study) | Reported Performance (Reader Study, Aided) | Reported Performance (Reader Study, Unaided) |
    | --- | --- | --- | --- | --- |
    | Overall average AFROC FOM | Improvement over unaided reads | 0.740 (95% CI: 0.721, 0.760) | 0.739 (95% CI: 0.705, 0.773) | 0.667 (95% CI: 0.633, 0.701) |
    | Difference in overall average AFROC FOM (Aided - Unaided) | > 0 | N/A | 0.072 (95% CI: 0.047, 0.097; p < 0.0001) | N/A |
    | Overall average Se (image-based) | N/A | 70.8% (95% CI: 68.0, 73.7) | N/A | N/A |
    | Overall average PPV (image-based) | N/A | 59.5% (95% CI: 56.5, 62.5) | N/A | N/A |
    | Overall average Se (lesion-based, pooled) | N/A | 73.6% (95% CI: 71.1, 76.0) | N/A | N/A |
    | Overall average PPV (lesion-based, pooled) | N/A | 64.9% (95% CI: 62.3, 67.6) | N/A | N/A |

    Note: The document primarily focuses on demonstrating superiority in the reader study using AFROC FOM as the primary endpoint. While standalone metrics are reported, explicit acceptance criteria for these specific values are not provided within this document. The implicit acceptance criterion for the clinical reader study was that the AFROC FOM for aided reads is statistically significantly superior to unaided reads.
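    For context on the lesion-based rows above: lesion-level Se and PPV are typically computed by matching predicted boxes to ground-truth boxes. The document does not state the matching rule used, so the greedy intersection-over-union (IoU) matching and the 0.5 threshold in this sketch are assumptions, not the sponsor's method.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def lesion_counts(preds, truths, thresh=0.5):
    """Greedily match predictions to ground truth; return (tp, fp, fn).

    preds/truths: lists of (x1, y1, x2, y2) boxes for one image.
    Pool the counts across images before computing
    Se = tp / (tp + fn) and PPV = tp / (tp + fp).
    """
    matched = set()
    tp = 0
    for p in preds:
        best, best_iou = None, thresh
        for i, t in enumerate(truths):
            if i in matched:
                continue
            s = iou(p, t)
            if s >= best_iou:
                best, best_iou = i, s
        if best is not None:
            matched.add(best)
            tp += 1
    return tp, len(preds) - tp, len(truths) - tp
```

    Summing the per-image counts before dividing is one common reading of the "pooled" qualifier in the table.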

    Study Details

    1. Sample size used for the test set and the data provenance:

      • Standalone Study: 1034 adult radiographs.
        • Data Provenance: Collected from 10 US sites.
        • Retrospective/Prospective: Not explicitly stated; the phrasing "collected from" suggests a retrospective collection.
      • Clinical Data (Reader Study): 226 adult radiographs.
        • Data Provenance: Collected from 10 US sites.
        • Retrospective/Prospective: Not explicitly stated; the phrasing "collected from" suggests a retrospective collection.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Standalone Study: Three US board-certified dentists.
      • Clinical Data (Reader Study): Three US board-certified dentists.
    3. Adjudication method for the test set:

      • The document states that the ground truth was "ground-truthed by three US board-certified dentists." It does not explicitly detail an adjudication method beyond this, such as "2+1" or "3+1" (e.g., if at least two out of three agreed, or a tie-breaker by a chief expert). It implies a consensus by the three experts.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • Yes, a fully crossed, randomized, controlled multi-reader multi-case (MRMC) study was performed.
      • Effect Size: The overall average AFROC FOM for readers aided by VCA was 0.739, compared to 0.667 for unaided reads. The difference was 0.072 (95% CI: 0.047, 0.097; p < 0.0001), an improvement of 0.072 in AFROC FOM when readers used AI assistance (a simplified illustration of this paired comparison appears after this list).
    5. If a standalone study (i.e., algorithm-only performance, without a human in the loop) was done:

      • Yes, a "Bench Testing (Standalone Study)" was conducted. The performance metrics are reported in the table above, including an overall average AFROC FOM of 0.740, image-based Sensitivity of 70.8%, and PPV of 59.5%.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • Expert Consensus: The ground truth for both the standalone and reader study test sets was established by "three US board-certified dentists." This implies expert consensus based on their review of the radiographs.
    7. The sample size for the training set:

      • The document does not explicitly state the sample size for the training set. It only mentions "Supervised Deep Learning" as the development technology.
    8. How the ground truth for the training set was established:

      • The document does not explicitly state how the ground truth for the training set was established. It only mentions "Supervised Deep Learning," which necessitates a labeled training set, but the process of labeling is not described.
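    As noted in item 4, MRMC endpoints such as the aided-vs-unaided AFROC FOM difference are normally analyzed with dedicated methods (e.g., Obuchowski-Rockette) that account for both reader and case variability; the document does not describe the exact analysis used. The sketch below is only a simplified illustration with hypothetical per-reader FOMs, using a paired t-interval across readers.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical per-reader AFROC FOMs (NOT from the document). A real
# MRMC analysis (e.g., Obuchowski-Rockette) also models case-level
# variability, which this reader-only paired t-interval ignores.
aided   = [0.75, 0.71, 0.76, 0.72, 0.74]
unaided = [0.68, 0.64, 0.69, 0.66, 0.67]

diffs = [a - u for a, u in zip(aided, unaided)]
n = len(diffs)
d_bar = mean(diffs)
se = stdev(diffs) / sqrt(n)
t_crit = 2.776  # two-sided 95% t critical value, df = n - 1 = 4
lo, hi = d_bar - t_crit * se, d_bar + t_crit * se
print(f"mean aided - unaided FOM difference: {d_bar:.3f} "
      f"(95% CI: {lo:.3f}, {hi:.3f})")
```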