Videa Dental AI is a computer-assisted detection (CADe) device that analyzes intraoral radiographs to identify and localize the following features. Videa Dental AI is indicated for the review of bitewing, periapical, and panoramic radiographs acquired from patients aged 3 years or older.
Suspected Dental Findings:
- Caries
- Attrition
- Broken/Chipped Tooth
- Restorative Imperfection
- Pulp Stones
- Dens Invaginatus
- Periapical Radiolucency
- Widened Periodontal Ligament
- Furcation
- Calculus
Historical Treatments:
- Crown
- Filling
- Bridge
- Post and Core
- Root Canal
- Endosteal Implant
- Implant Abutment
- Bonded Orthodontic Retainer
- Braces
Normal Anatomy:
- Maxillary Sinus
- Maxillary Tuberosity
- Mental Foramen
- Mandibular Canal
- Inferior Border of the Mandible
- Mandibular Tori
- Mandibular Condyle
- Developing Tooth
- Erupting Teeth
- Non-matured Erupted Teeth
- Exfoliating Teeth
- Impacted Teeth
- Crowding Teeth
Videa Dental AI (VDA) software is a cloud-based, AI-powered medical device for the automatic detection, in dental radiographs, of the features listed in the Indications for Use statement. The device is available as a service via an API (Application Programming Interface) behind a firewalled network. Given proper authentication and an eligible bitewing, periapical, or panoramic image, the device returns a set of bounding boxes and/or segmentation outlines (depending on the indication) representing each suspected dental finding, historical treatment, or normal anatomy item detected.
VDA is accessed by the dental practitioner through their dental image viewer. From within the dental viewer, the user can upload a radiograph to VDA and then review the results. The device outputs a binary indication of the presence or absence of findings for each indication. If findings are present, the device outputs the number of findings by finding type and the coordinates of the bounding boxes/segmentation outlines for each finding. If no findings are present, the device outputs a clear indication that no findings were identified for that indication. The device output shows all findings from one radiograph regardless of the number of teeth present.
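To make the API-based workflow concrete, the sketch below shows how a dental image viewer might upload a radiograph and read back per-indication results. The endpoint URL, authentication scheme, and response field names (indications, detections, bounding_box, segmentation) are assumptions for illustration only; the 510(k) summary does not publish the actual API schema.

```python
import requests  # generic HTTP client; the real integration is handled by the dental viewer

# Hypothetical endpoint and credentials -- placeholders, not the vendor's actual API.
API_URL = "https://videa.example.com/v1/analyze"
API_TOKEN = "REPLACE_WITH_CREDENTIALS"

def analyze_radiograph(image_path: str, modality: str, patient_age: int) -> dict:
    """Upload one bitewing/periapical/panoramic radiograph and return the parsed JSON results."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"image": image_file},
            data={"modality": modality, "patient_age": patient_age},
            timeout=60,
        )
    response.raise_for_status()
    return response.json()

def summarize_findings(results: dict) -> None:
    """Print presence/absence, counts, and localization geometry for each indication."""
    for indication, result in results.get("indications", {}).items():
        detections = result.get("detections", [])
        if not detections:
            print(f"{indication}: no findings identified")
            continue
        print(f"{indication}: {len(detections)} finding(s)")
        for detection in detections:
            # Each detection carries a bounding box or a segmentation outline, per the indication.
            geometry = detection.get("bounding_box") or detection.get("segmentation")
            print(f"  geometry={geometry}")
```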
The intended users of Videa Dental AI are trained dental professionals such as dentists and dental hygienists. For the suspected dental finding indications specifically, VDA is intended to be used as an adjunct tool and should not replace a dentist's review of the image. Only dentists who are performing diagnostic activities shall use the suspected dental finding indications.
VDA should not be used in lieu of a full patient evaluation or relied upon solely to make or confirm a diagnosis. The system is to be used by trained dental professionals including, but not limited to, dentists and dental hygienists.
Depending on the specific VDA indication for use, the intended patients of Videa Dental AI are patients 3 years of age and older with primary, mixed, and/or permanent dentition who are undergoing routine dental visits or are suspected of having one of the suspected dental findings listed in the VDA indications for use statement above. VDA may be used on eligible bitewing, periapical, or panoramic radiographs, depending on the indication.
See Table 1 below for the specific patient age group and image modality that each VDA indication for use is designed and tested to meet. VDA uses the image's metadata to show only the indications for the patient ages and image modalities in scope, as shown in Table 1. VDA will not show any findings output for an indication for use that is outside of the patient age and radiographic view scope.
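As an illustration of this metadata gating, the sketch below filters the indications shown for an image based on patient age and radiographic view. The age thresholds and modality sets are placeholders; the actual per-indication scope is defined in Table 1 of the 510(k) summary, which is not reproduced here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IndicationScope:
    min_age: int           # minimum patient age (years) for which the indication is in scope
    modalities: frozenset  # radiographic views for which the indication is in scope

# Placeholder scope table -- the real values come from Table 1 and differ per indication.
SCOPE = {
    "Caries": IndicationScope(min_age=3, modalities=frozenset({"bitewing", "periapical"})),
    "Calculus": IndicationScope(min_age=3, modalities=frozenset({"bitewing", "periapical"})),
    "Periapical Radiolucency": IndicationScope(min_age=3, modalities=frozenset({"periapical", "panoramic"})),
}

def indications_in_scope(patient_age: int, modality: str) -> list[str]:
    """Return only the indications whose age group and radiographic view match the image metadata."""
    return [
        name
        for name, scope in SCOPE.items()
        if patient_age >= scope.min_age and modality in scope.modalities
    ]

# Example: a panoramic image of a 5-year-old shows only the indications scoped to that view and age.
print(indications_in_scope(patient_age=5, modality="panoramic"))
```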
Here's a summary of the acceptance criteria and study details for Videa Dental AI, based on the provided FDA 510(k) Clearance Letter:
1. Table of Acceptance Criteria and Reported Device Performance:
The document does not explicitly state numeric acceptance criteria thresholds for all indications. It does, however, indicate that Videa Dental AI met its performance requirements by demonstrating a statistically significant improvement in clinicians' detection performance when aided by the device compared with unaided reading for certain indications in the clinical study. For standalone performance, DICE scores are reported for caries, calculus, and normal tooth anatomy segmentations.
| Performance Metric / Indication | Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|---|
| Clinical Performance (MRMC Study) | | |
| AFROC FOM (aided vs. unaided) | Aided AFROC FOM > unaided AFROC FOM (statistically significant improvement) | Clinicians showed statistically significant improvement in detection performance with VDA aid for caries and periapical radiolucency with a second operating point. The average aided improvement across 8 VDA indications was 0.002%. |
| Standalone Performance (Bench Testing) | | |
| Caries (DICE) | Not explicitly stated | 0.720 |
| Calculus (DICE) | Not explicitly stated | 0.716 |
| Enamel (DICE) | Not explicitly stated | 0.907 |
| Pulp (DICE) | Not explicitly stated | 0.825 |
| Crown dentin (DICE) | Not explicitly stated | 0.878 |
| Root dentin (DICE) | Not explicitly stated | 0.874 |
| Standalone specificity, caries (second operating point) | Not explicitly stated | 0.867 |
| Standalone specificity, periapical radiolucency (second operating point) | Not explicitly stated | 0.989 |
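For reference, the DICE scores in the table above compare the device's segmentation output against the expert ground-truth outline. The snippet below shows the standard DICE coefficient computed on two binary masks; it illustrates the metric's definition only and is not the study's actual evaluation code.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, truth_mask: np.ndarray) -> float:
    """Standard DICE score between two binary segmentation masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred_mask.astype(bool)
    truth = truth_mask.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks count as perfect agreement
    return 2.0 * intersection / denom

# Toy example: two overlapping square regions on a small grid.
pred = np.zeros((100, 100)); pred[20:60, 20:60] = 1
truth = np.zeros((100, 100)); truth[30:70, 30:70] = 1
print(round(dice_coefficient(pred, truth), 3))  # 0.562 for this toy overlap
```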
2. Sample Size Used for the Test Set and Data Provenance:
- Standalone Performance Test Set:
- Sample Size: 1,445 radiographs
- Data Provenance: Collected from more than 35 US sites (retrospective collection implied, as the data were used for ground-truthing/benchmarking).
- Clinical Performance (MRMC) Test Set:
- Sample Size: 378 radiographs
- Data Provenance: Collected from over 25 US locations spread across the country (retrospective collection implied, as the data were used for ground-truthing/benchmarking).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:
- Standalone Performance Test Set:
- Number of Experts: Three
- Qualifications: US board-certified dentists.
- Clinical Performance (MRMC) Test Set:
- Number of Experts: Not explicitly stated for the initial labeling, but a single US licensed dentist adjudicated the labels to establish the reference standard.
- Qualifications: US licensed dentists labeled the data, and a US licensed dentist adjudicated those labels.
4. Adjudication Method for the Test Set:
- Standalone Performance Test Set: Ground-truthed by three US board-certified dentists. The specific adjudication method (e.g., consensus, majority) is not explicitly detailed beyond "ground-truthed by three...".
- Clinical Performance (MRMC) Test Set: US licensed dentists labeled the data, and a US licensed dentist adjudicated those labels to establish a reference standard. This implies a consensus or expert-review model, possibly 2+1 or similar, where initial labels were reviewed and finalized by a single adjudicator.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, What was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance:
- Yes, an MRMC comparative effectiveness study was done.
- Hypothesis Tested:
- H₀: AFROC FOM_aided − AFROC FOM_unaided ≤ 0
- H₁: AFROC FOM_aided − AFROC FOM_unaided > 0
- Effect Size:
- Across 8 Videa Dental AI Suspect Dental Finding indications in the clinical study, the average amount of aided improvement over unaided performance was 0.002%.
- For the caries and periapical radiolucency VDA indications (with a second operating point), clinicians had statistically significant improvement in detection performance regardless of the operating point used. The specific AFROC FOM delta is not provided for these, only that it was statistically significant.
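As a simplified illustration of the one-sided hypothesis above, the sketch below runs a paired one-sided test on hypothetical per-reader AFROC FOM values. The numbers are invented, and a real MRMC analysis would use a dedicated model (e.g., Obuchowski-Rockette) that accounts for both reader and case variability rather than a plain paired t-test.

```python
import numpy as np
from scipy import stats

# Hypothetical per-reader AFROC figures of merit (not from the study).
fom_unaided = np.array([0.71, 0.68, 0.74, 0.70, 0.66, 0.72])
fom_aided = np.array([0.74, 0.70, 0.75, 0.73, 0.69, 0.74])

# One-sided paired test of H1: mean(FOM_aided - FOM_unaided) > 0.
t_stat, p_value = stats.ttest_rel(fom_aided, fom_unaided, alternative="greater")
mean_delta = float(np.mean(fom_aided - fom_unaided))
print(f"mean aided-minus-unaided FOM difference = {mean_delta:.3f}, one-sided p = {p_value:.4f}")
```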
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done:
- Yes, a standalone performance assessment was conducted.
- It measured and reported the performance of Videa Dental AI by itself, in the absence of any interaction with a dental professional, in identifying regions of interest for all suspected dental finding, historical treatment, and normal anatomy VDA indications.
7. The Type of Ground Truth Used:
- Expert Consensus/Review: The ground truth for both standalone and clinical studies was established by US board-certified or licensed dentists who labeled and/or adjudicated the findings on the radiographs.
8. The Sample Size for the Training Set:
- The document does not explicitly state the sample size for the training set. It mentions the AI algorithms were "trained with that patient population" and "trained with bitewing, periapical and panoramic radiographs," but gives no specific number of images or patients for the training dataset.
9. How the Ground Truth for the Training Set Was Established:
- The document does not explicitly state how the ground truth for the training set was established. It only broadly states that the AI algorithms were trained with a specific patient population and image types. Given the general practice for medical AI, it can be inferred that expert labeling similar to the test set would have been used, but this is not confirmed in the provided text.