K Number: K251002
Device Name: Videa Dental AI
Manufacturer: VideaHealth, Inc.
Date Cleared: 2025-09-19 (171 days)
Product Code: MYN
Regulation Number: 892.2070

Reference & Predicate Devices
Predicate For: N/A
Intended Use

Videa Dental AI is a computer-assisted detection (CADe) device that analyzes intraoral radiographs to identify and localize the following features. Videa Dental AI is indicated for the review of bitewing, periapical, and panoramic radiographs acquired from patients aged 3 years or older.

Suspected Dental Findings:

  • Caries
  • Attrition
  • Broken/Chipped Tooth
  • Restorative Imperfection
  • Pulp Stones
  • Dens Invaginatus
  • Periapical Radiolucency
  • Widened Periodontal Ligament
  • Furcation
  • Calculus

Historical Treatments:

  • Crown
  • Filling
  • Bridge
  • Post and Core
  • Root Canal
  • Endosteal Implant
  • Implant Abutment
  • Bonded Orthodontic Retainer
  • Braces

Normal Anatomy:

  • Maxillary Sinus
  • Maxillary Tuberosity
  • Mental Foramen
  • Mandibular Canal
  • Inferior Border of the Mandible
  • Mandibular Tori
  • Mandibular Condyle
  • Developing Tooth
  • Erupting Teeth
  • Non-matured Erupted Teeth
  • Exfoliating Teeth
  • Impacted Teeth
  • Crowding Teeth
Device Description

Videa Dental AI (VDA) software is a cloud-based AI-powered medical device for the automatic detection of the features listed in the Indications For Use statement in dental radiographs. The device itself is available as a service via an API (Application Programming Interface) behind a firewalled network. Provided proper authentication and an eligible bitewing, periapical, or panoramic image, the device returns a set of bounding boxes and/or segmentation outlines (depending on the indication) representing the suspect dental finding, historical treatment, or normal anatomy detected.

VDA is accessed by the dental practitioner through their dental image viewer. From within the dental viewer, the user can upload a radiograph to VDA and then review the results. The device outputs a binary indication to identify the presence or absence of findings for each indication. If findings are present, the device outputs the number of findings by finding type and the coordinates of the bounding boxes/segmentation outlines for each finding. If no findings are present, the device outputs a clear indication that there are no findings identified for each indication. The device output will show all findings from one radiograph regardless of the number of teeth present.
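The submission does not publish the API schema. As context only, the minimal sketch below shows what a response of this shape could look like; the class and field names (Finding, RadiographResult, bounding_box, segmentation) are hypothetical and are not VideaHealth's actual interface.

    # Hypothetical sketch of a CADe response for one radiograph (assumed
    # names; NOT VideaHealth's actual API schema).
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Finding:
        indication: str                                    # e.g. "caries", "crown", "mandibular_canal"
        category: str                                      # "suspect_finding" | "historical_treatment" | "normal_anatomy"
        bounding_box: Optional[List[float]] = None         # [x_min, y_min, x_max, y_max] in pixels
        segmentation: Optional[List[List[float]]] = None   # polygon outline as [[x, y], ...]

    @dataclass
    class RadiographResult:
        image_id: str
        findings: List[Finding] = field(default_factory=list)

        def has_findings(self, indication: str) -> bool:
            """Binary presence/absence output for a single indication."""
            return any(f.indication == indication for f in self.findings)

        def count_findings(self, indication: str) -> int:
            """Number of findings reported for a single indication."""
            return sum(1 for f in self.findings if f.indication == indication)

A client embedded in a dental image viewer would upload the radiograph, receive a structure like this, and overlay the boxes or outlines on the image for review.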

The intended users of Videa Dental AI are trained dental professionals such as dentists and dental hygienists. For the suspect dental findings indications specifically, VDA is intended to be used as an adjunct tool and should not replace a dentist's review of the image. Only dentists that are performing diagnostic activities shall use the suspect dental finding indications.

VDA should not be used in-lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis. The system is to be used by trained dental professionals including, but not limited to, dentists and dental hygienists.

Depending on the specific VDA indication for use, the intended patients of Videa Dental AI are patients 3 years of age and older with primary, mixed, and/or permanent dentition who are undergoing routine dental visits or are suspected of having one of the suspected dental findings listed in the VDA indications for use statement above. VDA may be used on eligible bitewing, periapical, or panoramic radiographs depending on the indication.

See Table 1 below for the specific patient age group and image modality that each VDA indication for use is designed and tested to meet. VDA uses the image's metadata to show only the indications for the patient ages and image modalities in scope as shown in Table 1. VDA will not show any findings output for an indication for use that is outside of the patient age and radiographic view scope.
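As an illustration only, the sketch below shows how metadata-based gating of this kind could work, using two scopes taken from Table 1 (caries: 3 years and older on bitewing/periapical; periapical radiolucency: 22 years and older on periapical only). The function and dictionary names are hypothetical; this is not VideaHealth's implementation.

    # Illustrative sketch of metadata-based indication gating using two scopes
    # from Table 1. Names are hypothetical; not VideaHealth's implementation.
    from typing import List

    INDICATION_SCOPE = {
        "caries": {"min_age_years": 3, "views": {"bitewing", "periapical"}},
        "periapical_radiolucency": {"min_age_years": 22, "views": {"periapical"}},
    }

    def indications_in_scope(patient_age_years: float, radiographic_view: str) -> List[str]:
        """Return the indications whose output may be shown for this image's metadata."""
        view = radiographic_view.lower()
        return [
            name
            for name, scope in INDICATION_SCOPE.items()
            if patient_age_years >= scope["min_age_years"] and view in scope["views"]
        ]

    # A periapical image of a 10-year-old is in scope for caries but not for
    # periapical radiolucency (which requires age >= 22).
    assert indications_in_scope(10, "periapical") == ["caries"]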

AI/ML Overview

Here's a summary of the acceptance criteria and study details for Videa Dental AI, based on the provided FDA 510(k) Clearance Letter:

1. Table of Acceptance Criteria and Reported Device Performance:

The document doesn't explicitly state numeric acceptance criteria thresholds for all indications. However, it implicitly states that Videa Dental AI meets its performance requirements by demonstrating statistically significant improvement in detection performance for clinicians when aided by the device compared to unaided performance in the clinical study for certain indications. For standalone performance, DICE scores are provided for caries, calculus, and normal tooth anatomy segmentations.

Clinical Performance (MRMC Study):

  • AFROC FOM (Aided vs. Unaided). Acceptance criterion (implicit): aided AFROC FOM > unaided AFROC FOM, with statistically significant improvement. Reported performance: clinicians showed statistically significant improvement in detection performance with VDA aid for caries and periapical radiolucency with a second operating point; the average aided improvement across 8 VDA indications was 0.002%.

Standalone Performance (Bench Testing); acceptance criteria not explicitly stated; reported performance:

  • Caries (DICE): 0.720
  • Calculus (DICE): 0.716
  • Enamel (DICE): 0.907
  • Pulp (DICE): 0.825
  • Crown Dentin (DICE): 0.878
  • Root Dentin (DICE): 0.874
  • Standalone specificity, Caries (second operating point): 0.867
  • Standalone specificity, Periapical Radiolucency (second operating point): 0.989
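For context, the DICE (Sørensen–Dice) score reported above measures the overlap between the algorithm's segmentation and the reference segmentation, where 1.0 is perfect overlap. The minimal sketch below illustrates the computation; it is not the evaluation code used in the submission.

    # Minimal sketch of a DICE (Sorensen-Dice) overlap computation between a
    # predicted mask and a reference mask. Illustrative only; not the
    # evaluation code used in the submission.
    import numpy as np

    def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
        """DICE = 2 * |pred AND truth| / (|pred| + |truth|) on boolean masks."""
        pred = pred.astype(bool)
        truth = truth.astype(bool)
        denom = pred.sum() + truth.sum()
        if denom == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * np.logical_and(pred, truth).sum() / denom

    # Toy example: two 2x2 masks that overlap on a single pixel.
    pred = np.array([[1, 1], [0, 0]])
    truth = np.array([[1, 0], [0, 0]])
    print(dice_coefficient(pred, truth))  # 2*1 / (2+1) ≈ 0.667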

2. Sample Size Used for the Test Set and Data Provenance:

  • Standalone Performance Test Set:
    • Sample Size: 1,445 radiographs
    • Data Provenance: Collected from more than 35 US sites (retrospective, implied, as it's for ground-truthing/benchmarking).
  • Clinical Performance (MRMC) Test Set:
    • Sample Size: 378 radiographs
    • Data Provenance: Collected from over 25 US locations spread across the country (retrospective, implied, as it's for ground-truthing/benchmarking).

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:

  • Standalone Performance Test Set:
    • Number of Experts: Three
    • Qualifications: US board-certified dentists.
  • Clinical Performance (MRMC) Test Set:
    • Number of Experts: Not explicitly stated for the initial labeling, but a single US licensed dentist adjudicated the labels to establish the reference standard.
    • Qualifications: US licensed dentists labeled the data, and a US licensed dentist adjudicated those labels.

4. Adjudication Method for the Test Set:

  • Standalone Performance Test Set: Ground-truthed by three US board-certified dentists. The specific adjudication method (e.g., consensus, majority) is not explicitly detailed beyond "ground-truthed by three...".
  • Clinical Performance (MRMC) Test Set: US licensed dentists labeled the data, and a US licensed dentist adjudicated those labels to establish a reference standard. This implies a consensus or expert-review model, possibly 2+1 or similar, where initial labels were reviewed and finalized by a single adjudicator.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, What was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance:

  • Yes, an MRMC comparative effectiveness study was done.
  • Hypothesis Tested:
    • H₀: AFROC FOMₐᵢdₑd - AFROC FOMᵤₙₐᵢdₑd ≤ 0
    • H₁: AFROC FOMₐᵢdₑd - AFROC FOMᵤₙₐᵢdₑd > 0
  • Effect Size:
    • Across 8 Videa Dental AI Suspect Dental Finding indications in the clinical study, the average amount of aided improvement over unaided performance was 0.002%.
    • For the caries and periapical radiolucency VDA indications (with a second operating point), clinicians had statistically significant improvement in detection performance regardless of the operating point used. The specific AFROC FOM delta is not provided for these, only that it was statistically significant.
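As general background (not taken from the submission), the unweighted AFROC figure of merit can be written as a Wilcoxon-type statistic comparing lesion-localization ratings against the highest false-positive rating on lesion-free cases; the study itself may use a weighted variant, so this is only a sketch of the idea.

    % Background sketch of one common (unweighted) AFROC figure-of-merit
    % estimator; not necessarily the exact FOM used in this submission.
    % Requires amsmath for the cases environment.
    \[
    \hat{\theta}_{\mathrm{AFROC}}
      = \frac{1}{N_L N_N}\sum_{l=1}^{N_L}\sum_{n=1}^{N_N}\psi(x_l, y_n),
    \qquad
    \psi(x, y) =
    \begin{cases}
    1,   & x > y \\
    0.5, & x = y \\
    0,   & x < y
    \end{cases}
    \]
    % x_l: rating of the l-th correctly localized lesion
    % y_n: highest false-positive rating on the n-th lesion-free image
    % N_L: number of lesions; N_N: number of lesion-free images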

6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done:

  • Yes, a standalone performance assessment was conducted.
  • It measured and reported the performance of Videa Dental AI by itself, in the absence of any interaction with a dental professional in identifying regions of interest for all suspect dental finding, historical treatment, and normal anatomy VDA indications.

7. The Type of Ground Truth Used:

  • Expert Consensus/Review: The ground truth for both standalone and clinical studies was established by US board-certified or licensed dentists who labeled and/or adjudicated the findings on the radiographs.

8. The Sample Size for the Training Set:

  • The document does not explicitly state the sample size for the training set. It mentions the AI algorithms were "trained with that patient population" and "trained with bitewing, periapical and panoramic radiographs," but gives no specific number of images or patients for the training dataset.

9. How the Ground Truth for the Training Set Was Established:

  • The document does not explicitly state how the ground truth for the training set was established. It only broadly states that the AI algorithms were trained with a specific patient population and image types. Given the general practice for medical AI, it can be inferred that expert labeling similar to the test set would have been used, but this is not confirmed in the provided text.

FDA 510(k) Clearance Letter - Videa Dental AI

Page 1

U.S. Food & Drug Administration
10903 New Hampshire Avenue
Silver Spring, MD 20993
www.fda.gov

Doc ID # 04017.08.00

September 19, 2025

VideaHealth, Inc.
℅ Adam Foresman
Director of Quality & Regulatory Affairs
179 South Street
Floor 5
Boston, MA 02111

Re: K251002
Trade/Device Name: Videa Dental AI
Regulation Number: 21 CFR 892.2070
Regulation Name: Medical Image Analyzer
Regulatory Class: Class II
Product Code: MYN
Dated: March 13, 2025
Received: August 18, 2025

Dear Adam Foresman:

We have reviewed your section 510(k) premarket notification of intent to market the device referenced above and have determined the device is substantially equivalent (for the indications for use stated in the enclosure) to legally marketed predicate devices marketed in interstate commerce prior to May 28, 1976, the enactment date of the Medical Device Amendments, or to devices that have been reclassified in accordance with the provisions of the Federal Food, Drug, and Cosmetic Act (the Act) that do not require approval of a premarket approval application (PMA). You may, therefore, market the device, subject to the general controls provisions of the Act. Although this letter refers to your product as a device, please be aware that some cleared products may instead be combination products. The 510(k) Premarket Notification Database available at https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm identifies combination product submissions.

The general controls provisions of the Act include requirements for annual registration, listing of devices, good manufacturing practice, labeling, and prohibitions against misbranding and adulteration. Please note: CDRH does not evaluate information related to contract liability warranties. We remind you, however, that device labeling must be truthful and not misleading.

If your device is classified (see above) into either class II (Special Controls) or class III (PMA), it may be subject to additional controls. Existing major regulations affecting your device can be found in the Code of Federal Regulations, Title 21, Parts 800 to 898. In addition, FDA may publish further announcements concerning your device in the Federal Register.

Page 2

K251002 - Adam Foresman
Page 2

Additional information about changes that may require a new premarket notification is provided in the FDA guidance documents entitled "Deciding When to Submit a 510(k) for a Change to an Existing Device" (https://www.fda.gov/media/99812/download) and "Deciding When to Submit a 510(k) for a Software Change to an Existing Device" (https://www.fda.gov/media/99785/download).

Your device is also subject to, among other requirements, the Quality System (QS) regulation (21 CFR Part 820), which includes, but is not limited to, 21 CFR 820.30, Design controls; 21 CFR 820.90, Nonconforming product; and 21 CFR 820.100, Corrective and preventive action. Please note that regardless of whether a change requires premarket review, the QS regulation requires device manufacturers to review and approve changes to device design and production (21 CFR 820.30 and 21 CFR 820.70) and document changes and approvals in the device master record (21 CFR 820.181).

Please be advised that FDA's issuance of a substantial equivalence determination does not mean that FDA has made a determination that your device complies with other requirements of the Act or any Federal statutes and regulations administered by other Federal agencies. You must comply with all the Act's requirements, including, but not limited to: registration and listing (21 CFR Part 807); labeling (21 CFR Part 801); medical device reporting (reporting of medical device-related adverse events) (21 CFR Part 803) for devices or postmarketing safety reporting (21 CFR Part 4, Subpart B) for combination products (see https://www.fda.gov/combination-products/guidance-regulatory-information/postmarketing-safety-reporting-combination-products); good manufacturing practice requirements as set forth in the quality systems (QS) regulation (21 CFR Part 820) for devices or current good manufacturing practices (21 CFR Part 4, Subpart A) for combination products; and, if applicable, the electronic product radiation control provisions (Sections 531-542 of the Act); 21 CFR Parts 1000-1050.

All medical devices, including Class I and unclassified devices and combination product device constituent parts are required to be in compliance with the final Unique Device Identification System rule ("UDI Rule"). The UDI Rule requires, among other things, that a device bear a unique device identifier (UDI) on its label and package (21 CFR 801.20(a)) unless an exception or alternative applies (21 CFR 801.20(b)) and that the dates on the device label be formatted in accordance with 21 CFR 801.18. The UDI Rule (21 CFR 830.300(a) and 830.320(b)) also requires that certain information be submitted to the Global Unique Device Identification Database (GUDID) (21 CFR Part 830 Subpart E). For additional information on these requirements, please see the UDI System webpage at https://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance/unique-device-identification-system-udi-system.

Also, please note the regulation entitled, "Misbranding by reference to premarket notification" (21 CFR 807.97). For questions regarding the reporting of adverse events under the MDR regulation (21 CFR Part 803), please go to https://www.fda.gov/medical-devices/medical-device-safety/medical-device-reporting-mdr-how-report-medical-device-problems.

For comprehensive regulatory information about medical devices and radiation-emitting products, including information about labeling regulations, please see Device Advice (https://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance) and CDRH Learn (https://www.fda.gov/training-and-continuing-education/cdrh-learn). Additionally, you may contact the Division of Industry and Consumer Education (DICE) to ask a question about a specific regulatory topic. See the DICE website (https://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance/contact-us-division-industry-and-consumer-education-dice) for more information or contact DICE by email (DICE@fda.hhs.gov) or phone (1-800-638-2041 or 301-796-7100).

Page 3

K251002 - Adam Foresman
Page 3

Sincerely,

Lu Jiang

Lu Jiang, Ph.D.
Assistant Director
Diagnostic X-Ray Systems Team
DHT8B: Division of Radiologic Imaging
Devices and Electronic Products
OHT8: Office of Radiological Health
Office of Product Evaluation and Quality
Center for Devices and Radiological Health

Enclosure

Page 4

DEPARTMENT OF HEALTH AND HUMAN SERVICES
Food and Drug Administration

Form Approved: OMB No. 0910-0120
Expiration Date: 07/31/2026
See PRA Statement below.

Indications for Use

510(k) Number (if known): K251002

Device Name: Videa Dental AI

Indications for Use (Describe)

Videa Dental AI is a computer-assisted detection (CADe) device that analyzes intraoral radiographs to identify and localize the following features. Videa Dental AI is indicated for the review of bitewing, periapical, and panoramic radiographs acquired from patients aged 3 years or older.

Suspected Dental Findings:

  • Caries
  • Attrition
  • Broken/Chipped Tooth
  • Restorative Imperfection
  • Pulp Stones
  • Dens Invaginatus
  • Periapical Radiolucency
  • Widened Periodontal Ligament
  • Furcation
  • Calculus

Historical Treatments:

  • Crown
  • Filling
  • Bridge
  • Post and Core
  • Root Canal
  • Endosteal Implant
  • Implant Abutment
  • Bonded Orthodontic Retainer
  • Braces

Normal Anatomy:

  • Maxillary Sinus
  • Maxillary Tuberosity
  • Mental Foramen
  • Mandibular Canal
  • Inferior Border of the Mandible
  • Mandibular Tori
  • Mandibular Condyle
  • Developing Tooth
  • Erupting Teeth
  • Non-matured Erupted Teeth
  • Exfoliating Teeth
  • Impacted Teeth
  • Crowding Teeth

FORM FDA 3881 (8/23)
Page 1 of 2
PSC Publishing Services (301) 443-6740 EF

Page 5


Type of Use (Select one or both, as applicable)
☑ Prescription Use (Part 21 CFR 801 Subpart D) ☐ Over-The-Counter Use (21 CFR 801 Subpart C)

CONTINUE ON A SEPARATE PAGE IF NEEDED.

This section applies only to requirements of the Paperwork Reduction Act of 1995.

DO NOT SEND YOUR COMPLETED FORM TO THE PRA STAFF EMAIL ADDRESS BELOW.

The burden time for this collection of information is estimated to average 79 hours per response, including the time to review instructions, search existing data sources, gather and maintain the data needed and complete and review the collection of information. Send comments regarding this burden estimate or any other aspect of this information collection, including suggestions for reducing this burden, to:

Department of Health and Human Services
Food and Drug Administration
Office of Chief Information Officer
Paperwork Reduction Act (PRA) Staff
PRAStaff@fda.hhs.gov

"An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB number."

FORM FDA 3881 (8/23)
Page 2 of 2

Page 6

510(k) Summary

K251002
Page 1 of 11

In accordance with 21 CFR 807.87(h) and 21 CFR 807.92 the 510(k) Summary for the Videa Dental AI device is provided below.

1. SUBMITTER

Applicant: VideaHealth, Inc., 179 South Street, Floor 5, Boston, MA 02111; +1 617-340-9940; florian@videa.ai
Contact & Submission Correspondent: Adam Foresman, Director of Quality & Regulatory Affairs, VideaHealth, Inc.; +1 617-340-9940; adam@videa.ai
Date Prepared: September 17, 2025

2. DEVICE

Device Trade Name: Videa Dental AI
Device Common Name: Dental AI System
Classification Name: Medical image analyzer
Classification Regulation Number: 21 CFR 892.2070
Device Class: 2
Product Code: MYN

3. PREDICATE DEVICE

Predicate Device: K232384, VideaHealth's Videa Dental Assist

Page 7

510(k) Summary

Page 2 of 11

4. DEVICE DESCRIPTION

Videa Dental AI (VDA) software is a cloud-based AI-powered medical device for the automatic detection of the features listed in the Indications For Use statement in dental radiographs. The device itself is available as a service via an API (Application Programming Interface) behind a firewalled network. Provided proper authentication and an eligible bitewing, periapical, or panoramic image, the device returns a set of bounding boxes and/or segmentation outlines (depending on the indication) representing the suspect dental finding, historical treatment, or normal anatomy detected.

VDA is accessed by the dental practitioner through their dental image viewer. From within the dental viewer, the user can upload a radiograph to VDA and then review the results. The device outputs a binary indication to identify the presence or absence of findings for each indication. If findings are present, the device outputs the number of findings by finding type and the coordinates of the bounding boxes/segmentation outlines for each finding. If no findings are present, the device outputs a clear indication that there are no findings identified for each indication. The device output will show all findings from one radiograph regardless of the number of teeth present.

The intended users of Videa Dental AI are trained dental professionals such as dentists and dental hygienists. For the suspect dental findings indications specifically, VDA is intended to be used as an adjunct tool and should not replace a dentist's review of the image. Only dentists that are performing diagnostic activities shall use the suspect dental finding indications.

VDA should not be used in-lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis. The system is to be used by trained dental professionals including, but not limited to, dentists and dental hygienists.

Depending on the specific VDA indication for use, the intended patients of Videa Dental AI are patients 3 years of age and older with primary, mixed, and/or permanent dentition who are undergoing routine dental visits or are suspected of having one of the suspected dental findings listed in the VDA indications for use statement above. VDA may be used on eligible bitewing, periapical, or panoramic radiographs depending on the indication.

See Table 1 below for the specific patient age group and image modality that each VDA indication for use is designed and tested to meet. VDA uses the image's metadata to show only the indications for the patient ages and image modalities in scope as shown in Table 1. VDA will not show any findings output for an indication for use that is outside of the patient age and radiographic view scope.

Table 1: VDA Indications Scope by Patient Age and Image Modality Type

Videa Dental Assist Indication | Patient Age in Scope | Radiographic View in Scope
Caries | 3 years and older | Bitewing and Periapical
Attrition | 3 years and older | Bitewing and Periapical
Broken/Chipped | 3 years and older | Bitewing and Periapical
Restorative Imperfection | 3 years and older | Bitewing and Periapical
Pulp Stone | 12 years of age and older with permanent dentition | Bitewing and Periapical
Dens Invaginatus | 3 years and older | Bitewing and Periapical
Periapical Radiolucency | 22 years of age and older with permanent dentition | Periapical only
Furcation | 22 years of age and older with permanent dentition | Bitewing and Periapical
Calculus | 3 years and older | Bitewing and Periapical
Widened PDL | 3 years and older | Bitewing and Periapical
Historical Treatments: All Indications | 3 years and older | All on Bitewing, Periapical & Panoramic, except: (1) the 'Screw' historical treatment indication is only on Panoramic images; (2) the 'Plate' historical treatment indication is only on Panoramic images.
Normal Anatomy: Impacted Tooth, Mental Foramen, Maxillary Tuberosity | 12 years and older | Bitewing, Periapical & Panoramic
Normal Anatomy: All other indications | 3 years and older | Bitewing, Periapical & Panoramic, except: the 'Mandibular Condyle' normal anatomy indication is only on Panoramic images.

5. INTENDED USE/INDICATIONS FOR USE

Videa Dental AI is a computer-assisted detection (CADe) device that analyzes intraoral radiographs to identify and localize the following features. Videa Dental AI is indicated for the review of bitewing, periapical, and panoramic radiographs acquired from patients aged 3 years or older.

Suspected Dental Findings:

  • Caries
  • Attrition
  • Broken/Chipped Tooth
  • Restorative Imperfections
  • Pulp Stones
  • Dens Invaginatus
  • Periapical Radiolucency
  • Widened Periodontal Ligament
  • Furcation
  • Calculus

Historical Treatments:

  • Crown
  • Filling
  • Bridge
  • Post and Core
  • Root Canal
  • Endosteal Implant
  • Implant Abutment
  • Bonded Orthodontic Retainer
  • Braces

Normal Anatomy:

  • Maxillary Sinus

Page 10

510(k) Summary

Page 5 of 11

  • Maxillary Tuberosity
  • Mental Foramen
  • Mandibular Canal
  • Inferior Border of the Mandible
  • Mandibular Tori
  • Mandibular Condyle
  • Developing Tooth
  • Erupting Teeth
  • Non-matured Erupted Teeth
  • Exfoliating Teeth
  • Impacted Teeth
  • Crowding Teeth

6. SUBSTANTIAL EQUIVALENCE

Comparison of Indications

Videa Dental AI has the same indications for use statement and intended use as Videa Dental Assist. The only differences are the inclusion of a second output style (segmentation in addition to bounding boxes) and a second operating point (high sensitivity and high specificity) as user toggle settings. Videa Dental AI and Videa Dental Assist both analyze dental radiographs and highlight regions of interest in an image viewer. For the VDA suspect dental finding indications, both devices are only intended as an aid to the trained professional and are not intended to replace the diagnosis by the physician.

Videa Dental AI contains historical treatment and normal anatomy indications. These indications are not intended to be diagnostic aids. They are used for general understanding of features present in a radiograph and to assist the dental practice in patient operations management. These Videa Dental AI indications do not assess quality or the need for treatment of these features.

Videa Dental AI's artificial intelligence algorithms were trained with that patient population, and VideaHealth followed the pediatric medical device guidance document, among other standards and guidance documents listed in Section 7 below, in the design process. Videa Dental AI testing has shown it to be safe and effective for patients between the ages of 3 and 21 years with primary, mixed or permanent dentition in the image.

Videa Dental AI artificial intelligence algorithms were trained with bitewing, periapical and panoramic radiographs. Videa Dental AI testing has shown it to be safe and effective for bitewing, periapical and panoramic radiographs. Panoramic radiographs are only intended to be used for historical treatment and normal anatomy indications, which are not diagnostic aids.

Technological Comparisons

Table 2 compares the key technological features of the subject device to the predicate device (Videa Dental Assist, K232384).

Page 11

510(k) Summary

Page 6 of 11

Table 2: Device Comparison Table

Feature | Proposed Device | Predicate Device
510(k) Number | K251002 | K232384
Applicant | VideaHealth, Inc. | VideaHealth, Inc.
Device Name | Videa Dental AI | Videa Dental Assist
Classification Regulation | 892.2070 | 892.2070
Product Code | MYN | MYN
Image Modality | X-Ray | X-Ray
Radiograph View Type | Bitewing Images, Periapical Images, and Panoramic Images. Radiograph view type scope is indication specific. | Bitewing Images, Periapical Images, and Panoramic Images. Radiograph view type scope is indication specific.
Suspect Dental Findings Indications | Caries: Active and Secondary Caries at all penetration depths. Additional Suspect Dental Findings listed in the Indications For Use statement. | Caries: Active and Secondary Caries at all penetration depths. Additional Suspect Dental Findings listed in the Indications For Use statement.
Historical Treatment and Normal Anatomy Indications | Included | Included
Tooth Surface | For the caries indication only: Proximal, Buccal/Lingual, Occlusal, Root, Cervical. None of the additional 'Suspect Dental Finding' indications are specific to a tooth surface. | For the caries indication only: Proximal, Buccal/Lingual, Occlusal, Root, Cervical. None of the additional 'Suspect Dental Finding' indications are specific to a tooth surface.
Clinical Output | Message indicating if and how many findings were detected for each enabled Videa Dental AI indication for use. All Videa Dental AI indications use a set of togglable bounding boxes around suspected areas of interest. The user has the option to toggle to segmentation view (also called isocontour view) instead of bounding boxes for caries and calculus, to toggle between operating points (high sensitivity vs. high specificity) for caries and periapical radiolucency, and to toggle normal tooth anatomy segmentations (enamel, pulp, crown dentin and root dentin) on and off. | Message indicating if and how many findings were detected for each enabled Videa Dental Assist indication for use. All Videa Dental Assist indications use a set of toggleable bounding boxes around suspected areas of interest.
Patient Population | Patients ≥ 3 years of age. Patient age range is indication specific. | Patients ≥ 3 years of age. Patient age range is indication specific.
Intended User | Dental professionals | Dental professionals
Development Technology | Supervised Deep Learning | Supervised Deep Learning
Image Source | X-Ray Sensor | X-Ray Sensor
Image Viewing | Image Viewer | Image Viewer

Page 13

510(k) Summary

Page 8 of 11

7. PERFORMANCE DATA

Biocompatibility, Sterilization, and Reprocessing

Not applicable. The subject device is a software-only device. There are no direct or indirect patient-contacting components of the subject device. There are no sterile or reprocessed components.

Electrical Safety and Electromagnetic Compatibility (EMC)

Not applicable. The subject device is a software-only device. It contains no electric components, generates no electrical emissions, and uses no electrical energy of any type.

Software Verification and Validation Testing

Software verification and validation testing were conducted and documentation was provided as recommended by FDA's Guidance for Industry and FDA Staff, "Content of Premarket Submissions for Device Software Functions."

Among others, the following standards and guidance documents were used during the Videa Dental AI design, development, and testing.

  • ISO 14971:2019 Application of Risk Management to Medical Devices.
  • AAMI CR34971:2022 Guidance on the Application of ISO 14971 to Artificial Intelligence and Machine Learning
  • IEC 62304 Edition 1.1 2015-06 Consolidated Version: Medical Device Software - Software Life Cycle Processes
  • Good Machine Learning Practice for Medical Device Development: Guiding Principles October 2021.
  • FDA Content of Premarket Submissions for Device Software Functions (June 14, 2023)

Bench Testing

A Standalone Performance Assessment was conducted to measure and report the performance of Videa Dental AI by itself, in the absence of any interaction with a dental professional in identifying the regions of interest for that specific indication. All suspect dental finding, historical treatment and normal anatomy VDA indications were in scope. The dataset was 1,445 radiographs collected from more than 35 US sites that were ground-truthed by three US board-certified dentists. The same data distribution was used for the new design of Videa Dental AI vs. the predicate Videa Dental Assist.

Because there was no lesion-detection AI training for Videa Dental AI, the predicate Videa Dental Assist generalizability analysis for lesion detection still applies. Generalizability was not reperformed for this analysis.

Page 14

510(k) Summary

Page 9 of 11

The bench study results were:

  • VDA caries had a DICE of 0.720 and calculus had a DICE of 0.716.
  • The normal tooth anatomy segmentations had the following DICE statistics:
    • Enamel is 0.907
    • Pulp is 0.825
    • Crown Dentin is 0.878
    • Root Dentin is 0.874

Animal Testing

Not applicable. Animal studies are not necessary to establish the substantial equivalence.

Clinical Testing

A fully crossed, randomized, multiple reader multiple case (MRMC) controlled study was performed to determine whether the diagnostic accuracy of readers aided by VDA is superior to reader accuracy when unaided by VDA, as determined by the AFROC Figure of Merit (AFROC FOM). The hypothesis to be tested is:

H₀: AFROC FOMₐᵢdₑd - AFROC FOMᵤₙₐᵢdₑd ≤ 0
H₁: AFROC FOMₐᵢdₑd - AFROC FOMᵤₙₐᵢdₑd > 0

where AFROC FOMₐᵢdₑd is the population-mean AFROC FOM for aided reads, and similarly with AFROC FOMᵤₙₐᵢdₑd for unaided reads.

Suspect dental finding VDA indications that had segmentation view and/or a second operating point were in scope of the clinical test. The other indications are unchanged from the predicate Videa Dental Assist's clinical test results. Clinical testing was performed on 378 radiographs collected from over 25 US locations spread across the country. US licensed dentists labeled the data and a US licensed dentist adjudicated those labels to establish a reference standard for the study.

There were N=20 readers that participated in the study and reviewed all images with and without VDA AI predictions in a randomized fashion.

The patients in the dataset were 24% female, 21% male, 15% other, and 39% unknown.

There were N=6 sensor manufacturers that had enough samples to perform generalizability statistical analysis on. Those image sensor manufacturers were: AirTechniques, Carestream, Dexis, Gendex, Kavo, and Schick. Tables 5 and 6 describe the distribution of the study for the two significant design input expansions between Videa Dental AI and the predicate Videa Dental Assist: patient age and radiographic view.

Page 15

510(k) Summary

Page 10 of 11

Table 5: Demographic breakdown by age

Subject Age (Years) | Percentage
3 - 11 | 28%
12 - 21 | 20%
22 - 40 | 14%
41 - 60 | 14%
61 and older | 8%
Unknown | 15%

All images, regardless of patient age, were classified as primary dentition only, mixed dentition, or permanent dentition only.

Table 6: Image breakdown by radiographic view

Radiographic View | Percentage
Bitewing | 56%
Periapical | 44%
Panoramic | N/A. Not in scope.

Across the 8 Videa Dental AI Suspect Dental Finding indications in the clinical study, there was no statistically significant difference in detection performance between the bounding box and segmentation view types. The average amount of aided improvement over unaided performance across these 8 VDA indications was 0.002%. Additionally, none of the 8 VDA indications individually had a statistically significant difference between bounding box and segmentation view.

The caries and periapical radiolucency VDA indications in the clinical study with a second operating point both showed that clinicians had statistically significant improvement in detection performance regardless of the operating point used. Some clinicians performed better at one setting than the other; however, all showed clinical benefit regardless of the operating point used.

At the second operating point, VDA had a standalone specificity of 0.867 for caries and 0.989 for periapical radiolucency (PRL).

No adverse events were observed during the clinical study. Clinical testing demonstrated that the Videa Dental AI meets performance requirements.

Page 16

510(k) Summary

Page 11 of 11

Conclusion

There are no differences in design input scopes between Videa Dental AI and Videa Dental Assist. They also have the same indications for use statements and intended uses. The design changes for the differences between Videa Dental AI and the predicate do not raise different questions of safety and effectiveness, as shown in the Videa Dental AI testing. There was no retraining of the AI model's lesion localization between Videa Dental AI and Videa Dental Assist, and there are no other technological differences that raise different questions of safety and effectiveness.

Although there are differences in the testing methodology (namely the inclusion of segmentation view type and a second operating point for certain VDA indications), they do not raise different questions of safety and effectiveness. The calculation methodology for sensitivity, specificity, Alternative Free-response Receiver Operating Characteristic Figure of Merit (AFROC FOM), and other statistical techniques is the same between Videa Dental AI and Videa Dental Assist. Both Videa Dental AI and Videa Dental Assist had the same clinical study acceptance criteria. The results of the bench testing and clinical testing demonstrate that the performance of Videa Dental AI is comparable to that of Videa Dental Assist. Both Videa Dental AI and Videa Dental Assist met their acceptance criteria. Therefore, Videa Dental AI can be found substantially equivalent to Videa Dental Assist.

§ 892.2070 Medical image analyzer.

(a) Identification. Medical image analyzers, including computer-assisted/aided detection (CADe) devices for mammography breast cancer, ultrasound breast lesions, radiograph lung nodules, and radiograph dental caries detection, is a prescription device that is intended to identify, mark, highlight, or in any other manner direct the clinicians' attention to portions of a radiology image that may reveal abnormalities during interpretation of patient radiology images by the clinicians. This device incorporates pattern recognition and data analysis capabilities and operates on previously acquired medical images. This device is not intended to replace the review by a qualified radiologist, and is not intended to be used for triage, or to recommend diagnosis.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Design verification and validation must include:

(i) A detailed description of the image analysis algorithms including a description of the algorithm inputs and outputs, each major component or block, and algorithm limitations.

(ii) A detailed description of pre-specified performance testing methods and dataset(s) used to assess whether the device will improve reader performance as intended and to characterize the standalone device performance. Performance testing includes one or more standalone tests, side-by-side comparisons, or a reader study, as applicable.

(iii) Results from performance testing that demonstrate that the device improves reader performance in the intended use population when used in accordance with the instructions for use. The performance assessment must be based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, predictive value, and diagnostic likelihood ratio). The test dataset must contain a sufficient number of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, concomitant diseases, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals of the device for these individual subsets can be characterized for the intended use population and imaging equipment.

(iv) Appropriate software documentation (e.g., device hazard analysis; software requirements specification document; software design specification document; traceability analysis; description of verification and validation activities including system level test protocol, pass/fail criteria, and results; and cybersecurity).

(2) Labeling must include the following:

(i) A detailed description of the patient population for which the device is indicated for use.

(ii) A detailed description of the intended reading protocol.

(iii) A detailed description of the intended user and user training that addresses appropriate reading protocols for the device.

(iv) A detailed description of the device inputs and outputs.

(v) A detailed description of compatible imaging hardware and imaging protocols.

(vi) Discussion of warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality or for certain subpopulations), as applicable.

(vii) Device operating instructions.

(viii) A detailed summary of the performance testing, including: test methods, dataset characteristics, results, and a summary of sub-analyses on case distributions stratified by relevant confounders, such as lesion and organ characteristics, disease stages, and imaging equipment.