Search Results
Found 2 results
510(k) Data Aggregation
(174 days)
VELMENI for DENTISTS (V4D) is a concurrent-read, computer-assisted detection software intended to assist dentists in the clinical detection of dental caries, fillings/restorations, fixed prostheses, and implants in digital bitewing, periapical, and panoramic radiographs of permanent teeth in patients 15 years of age or older. This device provides additional information for dentists in examining radiographs of patients' teeth. This device is not intended as a replacement for a complete examination by the dentist or their clinical judgment that considers other relevant information from the image, patient history, or actual in vivo clinical assessment. Final diagnoses and patient treatment plans are the responsibility of the dentist.
This device includes a Predetermined Change Control Plan (PCCP).
The V4D software medical device comprises the following key components (a request-flow sketch appears after the list):
- Web Application Interface delivers front-end capabilities and is the point of interaction between the device and the user.
- Machine Learning (ML) Engine delivers V4D's core ML capabilities through the radiograph type classifier, condition detection module, tooth numbering module, and merging module.
- Backend API allows interaction between all the components, as defined in this section, in order to fulfill the user's requests on the web application interface.
- Queue receives and stores messages from the Backend API to send to the AI-Worker.
- AI-Worker accepts radiograph analysis requests from the Backend API via the Queue, passes grayscale radiographs in the supported formats (JPEG and PNG) to the ML Engine, and returns the ML analysis results to the Backend API.
- Database and File Storage store critical information related to the application, including user data, patient profiles, analysis results, radiographs, and associated data.
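The summary does not give implementation details for these components, but the Backend API, Queue, and AI-Worker hand-off can be illustrated with a minimal sketch. Every name below (AnalysisRequest, ml_engine_analyze, ai_worker) is hypothetical and stands in for the components described above; it is not code from the cleared device.

```python
import queue
from dataclasses import dataclass

# Hypothetical request message; field names are illustrative only.
@dataclass
class AnalysisRequest:
    radiograph_id: str
    image_path: str  # grayscale radiograph in a supported format (JPEG or PNG)

def ml_engine_analyze(image_path: str) -> dict:
    """Stand-in for the ML Engine (radiograph type classifier, condition
    detection, tooth numbering, and merging modules)."""
    return {"image": image_path, "detections": []}  # placeholder result

def ai_worker(requests: queue.Queue, results: queue.Queue) -> None:
    """AI-Worker loop: take an analysis request queued by the Backend API,
    pass the radiograph to the ML Engine, and return the analysis results."""
    while True:
        req = requests.get()
        if req is None:  # sentinel used only in this sketch to stop the worker
            break
        results.put(ml_engine_analyze(req.image_path))

# Minimal usage: queue one request, stop the worker, and read back the result.
requests_q, results_q = queue.Queue(), queue.Queue()
requests_q.put(AnalysisRequest("rx-001", "bitewing_001.png"))
requests_q.put(None)
ai_worker(requests_q, results_q)
print(results_q.get())
```

The sentinel value and in-process queues are conveniences for this sketch; the actual queuing and messaging behavior of the device is not described in the summary.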
The following non-medical interfaces are also available with VELMENI for DENTISTS (V4D):
- VELMENI BRIDGE (VB) acts as a conduit enabling data and information exchange between the Backend API and third-party software such as Patient Management or Imaging Software.
- Rejection Review (RR) module captures the ML-detected conditions rejected by dental professionals to aid in future product development and to be evaluated in accordance with VELMENI's post-market surveillance procedure.
This 510(k) clearance letter for VELMENI for DENTISTS (V4D) states that the proposed device is unchanged from its predicate (VELMENI for Dentists, cleared under K240003), except for the inclusion of a Predetermined Change Control Plan (PCCP). Therefore, all performance data refers back to the original K240003 clearance. The provided document does not contain the specific performance study details directly; instead, it refers to them as established in the predicate device's clearance.
Based on the provided text, the available details are summarized below, noting where specific information is not included in this document but is referenced as existing in the predicate device's clearance.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document refers to the acceptance criteria and performance data as established for the predicate device (K240003). It also mentions that the PCCP updates the acceptance criteria for Sensitivity, Specificity, and Average False Positives to match the lower bounds of the confidence intervals demonstrated by the originally cleared models' standalone results. However, the specific values for these criteria and the reported performance are not explicitly stated in this document.
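As one illustration of how such a threshold can be derived, the sketch below computes the lower bound of a 95% Wilson score interval for a proportion such as sensitivity. The document does not state which confidence interval method was actually used, and the input values (364 detected lesions out of a hypothetical 500) are illustrative only.

```python
from math import sqrt

def wilson_lower_bound(successes: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for a binomial proportion
    (e.g., sensitivity), at the confidence level implied by z."""
    if n == 0:
        return 0.0
    p_hat = successes / n
    centre = p_hat + z**2 / (2 * n)
    margin = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (centre - margin) / (1 + z**2 / n)

# Illustrative only: 364/500 = 72.8% observed sensitivity gives a lower bound
# of roughly 0.687, which would then serve as the acceptance threshold.
print(round(wilson_lower_bound(364, 500), 3))
```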
Note: The document only states that MRMC results concluded the effectiveness of the V4D software in assisting readers to identify more caries and identify more fixed prostheses, implants, and restorations correctly. Specific quantitative performance metrics (e.g., Sensitivity, Specificity, AUC, FROC, etc.) are not provided in this document.
2. Sample Size Used for the Test Set and Data Provenance
The document states:
- "The new models will be evaluated on a combined test dataset with balanced ratio of historical and new data for validation to avoid overfitting historical data from repeated use."
- "The new test data is fully independent on a site-level from training/tuning data, and the test dataset remains at least 50% US data."
Specific sample size for the test set is not provided in this document.
Data Provenance: At least 50% US data, comprising both historical and new data. The test dataset appears to be retrospective, since it combines historical data with new data collected before evaluation.
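A minimal sketch of how these provenance constraints could be checked is shown below; the metadata fields (country, site, cohort) and their values are hypothetical and not taken from the document.

```python
# Hypothetical per-image metadata records; field names and values are illustrative.
test_set = [
    {"image_id": "img_001", "country": "US", "site": "site_07", "cohort": "historical"},
    {"image_id": "img_002", "country": "US", "site": "site_09", "cohort": "new"},
    {"image_id": "img_003", "country": "DE", "site": "site_12", "cohort": "new"},
]
training_sites = {"site_01", "site_02", "site_03"}

us_share = sum(r["country"] == "US" for r in test_set) / len(test_set)
overlapping_sites = {r["site"] for r in test_set} & training_sites

assert us_share >= 0.50, "test dataset must remain at least 50% US data"
assert not overlapping_sites, "test sites must be independent of training/tuning sites"
```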
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document does not specify the number of experts used and their qualifications for establishing ground truth for the test set.
4. Adjudication Method for the Test Set
The document does not specify the adjudication method used for the test set (e.g., 2+1, 3+1, none).
5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study
Yes, an MRMC comparative effectiveness study was done.
The document states: "MRMC results concluded the effectiveness of the V4D software in assisting readers to identify more caries and identify more fixed prostheses, implants, and restorations correctly."
Effect Size: The document does not provide a specific quantitative effect size of how much human readers improve with AI vs. without AI assistance. It only makes a qualitative statement about improved identification of conditions.
6. Standalone Performance Study
Yes, a standalone (algorithm only without human-in-the-loop performance) study was done.
The document states: "The acceptance criteria for Sensitivity, Specificity and Average False Positives have been updated to match the lower bounds of confidence interval demonstrated by the originally cleared models' standalone results." This implies that standalone performance metrics were evaluated for the original clearance.
7. Type of Ground Truth Used
The document does not explicitly state the type of ground truth used (e.g., expert consensus, pathology, outcomes data). However, for a dental imaging device assisting dentists, it is highly likely that expert consensus from dental professionals (dentists or dental radiologists) would have been used for establishing ground truth. The mention of "dental professionals" rejecting ML-detected conditions in the "Rejection Review (RR)" module also hints at expert review for ground truth establishment.
8. Sample Size for the Training Set
The document does not specify the sample size for the training set. It mentions "new and existing training and tuning data" for re-training.
9. How the Ground Truth for the Training Set Was Established
The document does not explicitly state how the ground truth for the training set was established. However, given the context of a medical device aiding dentists in clinical detection, it is highly probable that ground truth would have been established through expert annotations or consensus from qualified dental professionals.
(241 days)
VELMENI for DENTISTS (V4D) is a concurrent-read, computer-assisted detection software intended to assist dentists in the clinical detection of dental caries, fillings/restorations, fixed prostheses, and implants in digital bitewing, periapical, and panoramic radiographs of permanent teeth in patients 15 years of age or older. This device provides additional information for dentists in examining radiographs of patients' teeth. This device is not intended as a replacement for a complete examination by the dentist or their clinical judgment that considers other relevant information from the image, patient history, or actual in vivo clinical assessment. Final diagnoses and patient treatment plans are the responsibility of the dentist.
The V4D software medical device comprises the following key components:
- Web Application Interface delivers front-end capabilities and is the point of interaction between the device and the user.
- Machine Learning (ML) Engine delivers V4D's core ML capabilities through the radiograph type classifier, condition detection module, tooth numbering module, and merging module.
- Backend API allows interaction between all the components, as defined in this section, in order to fulfill the user's requests on the web application interface.
- Queue receives and stores messages from the Backend API to send to the AI-Worker.
- AI-Worker accepts radiograph analysis requests from the Backend API via the Queue, passes grayscale radiographs in the supported formats (JPEG and PNG) to the ML Engine, and returns the ML analysis results to the Backend API.
- Database and File Storage store critical information related to the application, including user data, patient profiles, analysis results, radiographs, and associated data.
The following non-medical interfaces are also available with VELMENI for DENTISTS (V4D):
- VELMENI BRIDGE (VB) acts as a conduit enabling data and information exchange between the Backend API and third-party software such as Patient Management or Imaging Software.
- Rejection Review (RR) module captures the ML-detected conditions rejected by dental professionals to aid in future product development and to be evaluated in accordance with VELMENI's post-market surveillance procedure.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for "Velmeni for Dentists (V4D)":
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present "acceptance criteria" in a tabular format with predefined thresholds. Instead, it reports the performance metrics from both standalone and clinical (MRMC) studies. The acceptance criteria are implicitly met if the reported performance demonstrates safety, effectiveness, and substantial equivalence to the predicate device.
Implicit Acceptance Criteria & Reported Device Performance:
| Metric / Feature | Acceptance Criteria (Implicit) | Reported Device Performance (Velmeni for Dentists (V4D)) |
|---|---|---|
| Standalone Performance | Demonstrate objective performance (sensitivity, specificity, Dice coefficient) for the indicated features across supported image types. | Lesion-level sensitivity: Caries (Bitewing 72.8%, Periapical 70.6%, Panoramic 68.3%); Fixed Prosthesis (Bitewing 92.1%, Periapical 81.0%, Panoramic 74.5%); Implant (Bitewing 81.1%, Periapical 94.5%, Panoramic 79.6%); Restoration (Bitewing 88.1%, Periapical 76.8%, Panoramic 72.6%). Mean false positives per image: Caries 0.24-0.33; Fixed Prosthesis 0.01-0.06; Implant 0.00-0.01; Restoration 0.10-0.62. Mean Dice score: Caries 77.07-82.77%; Fixed Prosthesis 91.47-97.09%; Implant 88.67-95.47%; Restoration 81.49-90.45%. |
| Clinical Performance (MRMC) | Demonstrate that human readers (dentists) improve their diagnostic performance (e.g., sensitivity, wAFROC AUC) when assisted by the AI device, compared to working unassisted, without an unacceptable increase in false positives or decrease in specificity. The device should provide clear benefit. | wAFROC AUC (aided vs. unaided): Bitewing 0.848 vs. 0.794 (diff 0.054); Periapical 0.814 vs. 0.721 (diff 0.093); Panoramic 0.615 vs. 0.579 (diff 0.036). Significant improvements in lesion-level sensitivity, case-level sensitivity, and/or reductions in false positives per image (details in the study section below). The study states: "The V4D software demonstrated clear benefit for bitewing and periapical views in all features. The panoramic view demonstrated benefit..." |
| Safety & Effectiveness | The device must be demonstrated to be as safe and effective as the predicate device, with any differences in technological characteristics not raising new or different questions of safety or effectiveness. | "The results of the stand-alone and MRMC reader studies demonstrate that the performance of V4D is as safe, as effective, and performs equivalent to that of the predicate device, and VELMENI has demonstrated that the proposed device complies with applicable Special Controls for Medical Image Analyzers. Therefore, VELMENI for DENTISTS (V4D) can be found substantially equivalent to the predicate device." |
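For reference, the standalone detection metrics in the table above follow conventional definitions; the 510(k) summary does not spell out its exact formulas, so the forms below are the standard ones and are given only for orientation.

```latex
\[
\text{Lesion-level sensitivity} = \frac{\text{true lesions detected by the algorithm}}{\text{all ground-truth lesions}},
\qquad
\text{Mean FPs per image} = \frac{\text{total false-positive detections}}{\text{number of images}}
\]
```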
2. Sample Sizes Used for the Test Set and Data Provenance
- Test Set Sample Sizes:
  - Standalone Performance:
    - 600 Bitewing images
    - 597 Periapical images
    - 600 Panoramic images
  - Clinical Performance (MRMC):
    - 600 Bitewing images (total caries: 315)
    - 597 Periapical images (total caries: 271)
    - 600 Panoramic images (total caries: 853)
- Data Provenance: The document states that "Subgroup analyses were performed among types of caries (primary and secondary caries; for caries-level sensitivity only), sex, age category, sensor, and study site." This suggests the data was collected from multiple study sites, implying some diversity in image sources, although the specific country of origin is not explicitly stated. The initial data development appears to have been centered on US licensed dentists and oral radiologists, suggesting US-based data collection, but this is not definitively stated for the entire dataset. The images were collected from various sensor manufacturers (Dexis, Dexis platinum, Kavo, Carestream, Planmeca). The document does not explicitly state whether the data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts:
  - Ground truth for both standalone and clinical performance studies was established by three US licensed dentists.
  - Non-consensus labels were adjudicated by one oral radiologist.
- Qualifications of Experts:
  - US licensed dentists.
  - Oral radiologist.
  - No further details on their experience (e.g., years of experience) are provided in this summary.
4. Adjudication Method for the Test Set
- Adjudication Method: Consensus with Adjudication.
- Ground truth was initially established by "consensus labels of three US licensed dentists."
- "Non-consensus labels were adjudicated by an oral radiologist." This implies a "3+1" or similar method, where the three initial readers attempt to reach consensus, and any disagreements are resolved by a fourth, independent expert (the oral radiologist).
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done, and its effect size
- Yes, an MRMC comparative effectiveness study was done. It was described as a "multi-reader fully crossed reader improvement study."
- Effect Size of Human Readers' Improvement with AI vs. Without AI Assistance: The effect size is presented as the difference in various metrics (wAFROC AUC, lesion-level sensitivity, case-level sensitivity) between aided and unaided modes.
  - wAFROC AUC:
    - Bitewing: +0.054 (Aided 0.848 vs. Unaided 0.794)
    - Periapical: +0.093 (Aided 0.814 vs. Unaided 0.721)
    - Panoramic: +0.036 (Aided 0.615 vs. Unaided 0.579)
  - Lesion-Level Sensitivity Improvement (Aided vs. Unaided):
    - Bitewing: Caries +12.8% (80.3% vs. 67.5%), Fixed Prosthesis +5.5% (95.7% vs. 90.2%), Implant +32.0% (93.2% vs. 61.3%), Restoration +16.7% (90.8% vs. 74.1%)
    - Periapical: Caries +24.8% (73.4% vs. 48.7%), Fixed Prosthesis +11.1% (91.1% vs. 80.0%), Implant +16.4% (95.9% vs. 79.5%), Restoration +10.3% (90.6% vs. 80.3%)
    - Panoramic: Caries +6.5% (27.2% vs. 15.1%), Fixed Prosthesis +8.2% (88.8% vs. 80.5%), Implant +8.7% (88.3% vs. 79.6%), Restoration +15.6% (73.0% vs. 57.4%)
- The study design also included measures of false positives per image (mean FPs per image) and case-level specificity to evaluate potential adverse effects of the aid. While some specificities slightly decreased (e.g., Periapical Caries: -10.3%), the document generally concludes: "The V4D software demonstrated clear benefit for bitewing and periapical views in all features. The panoramic view demonstrated benefit though the absolute benefit for caries sensitivity was smaller due to lower overall reader performance. In addition, for the panoramic view, there was a benefit in restoration sensitivity that was somewhat offset by a drop in image-level specificity."
6. If a Standalone (i.e., Algorithm Only, Without Human-in-the-Loop) Performance Study was done
- Yes, a standalone performance evaluation was done, conducted against the established ground truth. The results are reported in Tables 2 and 3 and include lesion-level sensitivity, case-level sensitivity, false positives per image, case-level specificity, and the Dice coefficient for segmentation; a numeric sketch of the Dice coefficient follows.
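As a numeric illustration of the Dice coefficient reported for segmentation, the sketch below computes it for two small binary masks. This is the standard definition (using NumPy), not the device's actual implementation.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice similarity between predicted and ground-truth binary masks:
    2 * |P intersect G| / (|P| + |G|)."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Toy example: each mask labels 3 pixels and they overlap on 2,
# giving Dice = 2*2 / (3+3) = 0.667.
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt = np.array([[1, 1, 0], [0, 0, 1]])
print(round(dice_coefficient(pred, gt), 3))
```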
7. The Type of Ground Truth Used
- The ground truth used for both standalone and clinical studies was based on expert consensus with adjudication. Specifically, it was established by "consensus labels of three US licensed dentists, and nonconsensus labels were adjudicated by an oral radiologist."
8. The Sample Size for the Training Set
- The document does not provide the sample size for the training set. It only describes the test set and validation processes.
9. How the Ground Truth for the Training Set Was Established
- The document does not explicitly describe how the ground truth for the training set was established. It focuses solely on the ground truth establishment for the test (evaluation) dataset. It's common for training data ground truth to be established through similar expert labeling processes, but this is not mentioned in the provided summary.