510(k) Data Aggregation (103 days)
Videa Perio Assist is a radiological semi-automated image processing software device intended to aid dental professionals in the measurement and visualization of mesial and distal bone levels associated with each tooth from bitewing and periapical radiographs. Measurements are made available as linear distances or relative percentages.
It should not be used in lieu of a full patient evaluation or relied upon solely to make or confirm a diagnosis. The system is to be used by trained professionals, including but not limited to dentists and dental hygienists.
Videa Perio Assist (VPA) software is a cloud-based AI-powered medical device for the automatic measurement of tooth interproximal alveolar bone level in dental radiographs. The device itself is available as an API (Application Programming Interface) behind a firewalled network. The device returns (1) a series of points with connecting lines measuring the mesial and distal alveolar bone levels associated with each tooth, and (2) this distance expressed in millimeters and/or as a percentage of the root length.
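To make the described output concrete, below is a minimal Python sketch of how an interproximal bone level could be derived from the three landmarks referenced later in this summary (CEJ, ABL, and root tip), expressed both in millimeters and as a percentage of root length. The function name, landmark inputs, and `mm_per_pixel` calibration factor are illustrative assumptions, not the device's actual API or algorithm.

```python
import math

def bone_level(cej, abl, root_tip, mm_per_pixel):
    """Compute one interproximal bone level from three landmark points.

    cej, abl, root_tip: (x, y) pixel coordinates of the cemento-enamel
    junction, alveolar bone level, and root tip for one tooth surface.
    mm_per_pixel: sensor calibration factor (hypothetical input).
    Returns the CEJ-to-ABL distance in millimeters and as a percentage
    of the CEJ-to-root-tip (root length) distance.
    """
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    cej_abl_px = dist(cej, abl)
    cej_rt_px = dist(cej, root_tip)
    bone_level_mm = cej_abl_px * mm_per_pixel
    bone_level_pct = 100.0 * cej_abl_px / cej_rt_px if cej_rt_px else float("nan")
    return bone_level_mm, bone_level_pct

# Example: mesial surface of one tooth on a bitewing radiograph
mm, pct = bone_level(cej=(120, 85), abl=(124, 132), root_tip=(131, 310), mm_per_pixel=0.04)
print(f"{mm:.2f} mm, {pct:.1f}% of root length")
```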
Videa Perio Assist is accessed by the trained professional through their image viewer. From within the image viewer, the user can upload a radiograph to Videa Perio Assist and then review the results. The device outputs a line identifying these points, from which the interproximal bone level is calculated.
The device output will show all applicable measurements from one radiograph regardless of the number of teeth present. If no teeth are present, the device outputs a clear indication that there are no identifiable teeth from which to calculate the interproximal bone level.
The intended users of Videa Perio Assist are trained professionals such as dentists and dental hygienists.
The intended patients of Videa Perio Assist are patients 12 years and above with permanent dentition undergoing routine dental visits or suspected of having interproximal bone level concerns. Videa Perio Assist may only be used with patients with permanent dentition present in the radiograph.
Here's a breakdown of the acceptance criteria and the study proving the device's performance, based on the provided FDA 510(k) summary for Videa Perio Assist:
1. Table of Acceptance Criteria & Reported Device Performance
Videa Perio Assist underwent two primary types of testing: Bench Testing (focused on algorithm precision/recall) and Clinical Testing (focused on algorithm sensitivity, specificity, and accuracy for clinical measurements).
Bench Testing Acceptance Criteria & Performance (Per Tooth Landmark Detection)
| Metric | Acceptance Criteria (Overall) | VPA Performance (Bitewing) | VPA Performance (Periapical - Overall) | VPA Performance (Periapical - CEJ-ABL subgroup) |
|---|---|---|---|---|
| Recall | > 82% | 94.4% | 91.9% | N/A (Not reported specifically for this subgroup for recall) |
| Precision | > 82% | 84.3% | N/A (Not reported overall for periapical) | 79.1% (Did not meet criteria for this subgroup) |
Note: The document reports that precision for the periapical CEJ-ABL subgroup was 79.1%, which did not meet the >82% acceptance criterion for that subgroup; the shortfall was attributed to difficulty in estimating obscured points on overlapping teeth.
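As an illustration of how per-landmark recall and precision figures like those above could be scored, here is a hedged Python sketch that greedily matches predicted landmarks to ground-truth landmarks within a pixel tolerance. The matching rule and the `tol_px` tolerance are assumptions; the 510(k) summary does not describe the actual scoring procedure.

```python
import math

def landmark_precision_recall(predicted, ground_truth, tol_px=10.0):
    """Score predicted landmarks against ground-truth landmarks.

    predicted, ground_truth: lists of (x, y) pixel coordinates.
    tol_px: maximum distance for a prediction to count as a true
    positive (an assumed tolerance; the summary does not state one).
    Greedily matches each ground-truth point to the nearest unmatched
    prediction within tolerance, then reports precision and recall.
    """
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    unmatched = list(predicted)
    true_positives = 0
    for gt in ground_truth:
        if not unmatched:
            break
        nearest = min(unmatched, key=lambda p: dist(p, gt))
        if dist(nearest, gt) <= tol_px:
            true_positives += 1
            unmatched.remove(nearest)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```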
Clinical Testing Acceptance Criteria & Performance (Per Interproximal Bone Level Measurement)
| Metric | Acceptance Criteria (Overall) | VPA Performance (Bitewing) | VPA Performance (Periapical - All) | VPA Performance (Periapical - CEJ->ABL subgroup) | VPA Performance (Periapical - CEJ->RT subgroup) | VPA Performance (Periapical - ABL->RT subgroup) |
|---|---|---|---|---|---|---|
| Sensitivity | > 82% | 92.8% (Met) | 88.3% (Met) | Met | Met | Met |
| Specificity | > 81% | 89.4% (Met) | 87.0% (Met) | Did not meet (for this subgroup) | Met | Met |
| Mean Absolute Error | < 1.5mm | Met | Met | Met | Met | Met |
Note: The document explicitly states that specificity for the periapical CEJ-ABL subgroup "Did not meet acceptance criteria," for the same reason as precision in the bench study: difficulty with obscured points.
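For context on the clinical metrics, the sketch below shows one plausible way sensitivity, specificity, and mean absolute error could be computed from paired device and reference measurements. The `loss_threshold_mm` cutoff used to dichotomize measurements is an illustrative assumption; the summary does not state how positive and negative cases were defined.

```python
def measurement_metrics(pairs, loss_threshold_mm=2.0):
    """Evaluate paired device vs. reference bone-level measurements.

    pairs: list of (device_mm, reference_mm) tuples for matched sites.
    loss_threshold_mm: assumed clinical cutoff for calling a site
    "bone loss present"; the summary does not state the actual rule.
    Returns (sensitivity, specificity, mean_absolute_error).
    """
    tp = tn = fp = fn = 0
    abs_errors = []
    for device_mm, reference_mm in pairs:
        abs_errors.append(abs(device_mm - reference_mm))
        device_pos = device_mm >= loss_threshold_mm
        reference_pos = reference_mm >= loss_threshold_mm
        if device_pos and reference_pos:
            tp += 1
        elif not device_pos and not reference_pos:
            tn += 1
        elif device_pos:
            fp += 1
        else:
            fn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    mae = sum(abs_errors) / len(abs_errors) if abs_errors else 0.0
    return sensitivity, specificity, mae

# Check against the stated criteria (>82% sensitivity, >81% specificity, <1.5 mm MAE)
sens, spec, mae = measurement_metrics([(3.1, 2.9), (1.2, 1.4), (2.4, 1.8), (0.9, 1.1)])
print(sens >= 0.82, spec >= 0.81, mae < 1.5)
```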
2. Sample Sizes Used for the Test Set and Data Provenance
- Bench Testing (Algorithm Standalone Performance):
- Sample Size: 996 radiographs and 16,131 landmarks.
- Data Provenance: The country of origin is not explicitly stated; for FDA submissions, data are generally expected to reflect the US population or be justifiable as generalizable to it. The document refers to "US licensed dentists" for clinical testing, implying a US context. The data was collected across two phases; whether collection was retrospective or prospective is not specified, though the context suggests retrospective collection for developing and testing the algorithm.
- Clinical Testing (Human-in-the-loop/Algorithm Assistance Effectiveness):
- Sample Size: 189 radiographs and over 2,350 lines (measurements).
- Data Provenance: Not explicitly stated regarding country of origin, though "US licensed dentists" and "US licensed periodontists" are mentioned, suggesting US data. The study was conducted in "two phases." The type of study (retrospective vs. prospective) is not specified.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Bench Testing: "Ground truth labeled across two phases." The number and qualifications of the individuals performing this ground-truth labeling are not explicitly stated in the provided text.
- Clinical Testing:
- Initial Labeling: "US licensed dentists labeled data across two phases." The number of dentists is not specified.
- Adjudication/Reference Standard Establishment: "two US licensed periodontists adjudicated those labels to establish a reference standard for the study." The specific qualifications (e.g., years of experience) for these periodontists are not detailed beyond being "US licensed periodontists."
4. Adjudication Method for the Test Set
- Bench Testing: Adjudication method not explicitly described, only that ground truthing occurred across two phases.
- Clinical Testing: The ground truth was established by "two US licensed periodontists adjudicat[ing] those labels." This implies a consensus process between the two periodontists, or one reviewing the other's work, but the specific consensus or tie-breaking rule (e.g., 2+1, 3+1) is not detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No, an MRMC comparative effectiveness study was NOT mentioned.
- The study primarily focused on the standalone performance of the algorithm against a human-established ground truth, not on how human readers' performance improved with AI assistance. The "Clinical Testing" section describes measuring the algorithm's performance against a reference standard, not a comparative study of human performance with and without the device.
6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance
- Yes, standalone performance was explicitly evaluated. This is what the "Bench Testing" section describes.
- The Videa Perio Assist output (lines and measurements) was scored directly against the ground-truthed landmarks and measurements.
7. Type of Ground Truth Used
- Expert Consensus/Expert Labeling:
- For Bench Testing, the document states only that ground truth was "labeled across two phases," which implies expert review and labeling of landmarks.
- For Clinical Testing, the reference standard (ground truth) was established by "two US licensed periodontists adjudicat[ing] those labels" initially provided by "US licensed dentists." This is an expert consensus or adjudicated expert labeling ground truth.
- Not Pathology or Outcomes Data.
8. Sample Size for the Training Set
- The document does not explicitly state the sample size for the training set. It only mentions that the "artificial intelligence algorithm was trained with that patient population" (referring to permanent dentition patients).
- However, it does state: "Bench testing has sensor manufacturer and patient age subgroup analysis for generalizability in a similar method as described in the clinical study generalizability section below. The sensor manufacturer and patient age did not have any outliers in the bench study." This suggests that the training data and evaluation focused on ensuring generalizability across these factors, but the specific training set size is not provided.
9. How the Ground Truth for the Training Set Was Established
- The document does not explicitly describe how the ground truth for the training set was established. It can be inferred that it would follow a similar expert labeling process as the test set (e.g., by dental professionals), but no specifics are provided.