510(k) Data Aggregation
(90 days)
Overjet Image Enhancement Assist is image processing software that can be used for image enhancement in dental radiographs viewed in the Overjet device platform. It is an optional tool for image quality enhancement.
The software improves image quality by:
- Reducing Noise: Utilizing a learning-based algorithm for noise reduction in bitewing and periapical images.
- Enhancing Contrast and Sharpness: Applying standard, non-learning-based techniques to enhance contrast and sharpness for bitewing, periapical, and panoramic images.
Raw images will be acquired and reviewed using the dental clinic's standard image acquisition and viewing software. This device operates only within the Overjet platform; it is not intended to replace the clinic's own diagnostic imaging system.
Overjet Image Enhancement is a Software as a Medical Device (SaMD) that enhances dental radiographic images within the Overjet Platform, intended for use by dental providers in clinics or hospitals. It supports routine dental images, including bitewing, periapical, and panoramic images, viewed within the Overjet Platform.
The software enhances image quality by reducing noise with a learning-based algorithm for bitewing and periapical images, and by improving contrast and sharpness using standard, non-learning-based techniques. For panoramic images, standard enhancement techniques improve contrast and sharpness without learning-based noise reduction. The enhancement feature can be toggled on and off by the user within the Overjet Platform. AI predictions for findings such as caries and calculus are run as specified for each FDA-cleared device, and only on unenhanced (original) images. There is no modification to the output of other FDA-cleared Overjet devices when the image enhancement feature is applied.
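For context, "standard, non-learning-based" contrast and sharpness enhancement typically means operations such as percentile contrast stretching and unsharp masking. The following NumPy sketch illustrates those generic techniques only; it is an assumption for illustration, not Overjet's actual pipeline:

```python
import numpy as np

def enhance(img: np.ndarray, amount: float = 1.0) -> np.ndarray:
    """Illustrative contrast stretch + unsharp mask (NOT the device's algorithm)."""
    img = img.astype(np.float64)
    # Contrast stretch: map the 1st..99th percentiles onto the full 0..255 range.
    lo, hi = np.percentile(img, [1, 99])
    stretched = np.clip((img - lo) / max(hi - lo, 1e-9), 0.0, 1.0) * 255.0
    # Unsharp mask: add back the difference from a 3x3 box blur to sharpen edges.
    pad = np.pad(stretched, 1, mode="edge")
    h, w = stretched.shape
    blur = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return np.clip(stretched + amount * (stretched - blur), 0.0, 255.0)
```

A learning-based denoiser would replace or precede steps like these for bitewing and periapical images; the toggle described above would simply switch between the raw and enhanced renderings.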
The provided text describes the Overjet Image Enhancement Assist device and its substantial equivalence determination by the FDA. While it states that "Overjet conducted the following performance testing: software verification and validation testing, a study that utilized retrospective data to demonstrate that the software enhanced image quality (quantification report and expert clinical evaluation)," and mentions "All tests passed successfully," it does not provide the specific acceptance criteria or the detailed results of the study that proves the device meets those criteria.
Therefore, I cannot fulfill your request for:
- A table of acceptance criteria and the reported device performance.
- Sample size used for the test set and data provenance.
- Number of experts and their qualifications for ground truth.
- Adjudication method for the test set.
- MRMC comparative effectiveness study results.
- Standalone performance results.
- Type of ground truth used.
- Sample size for the training set.
- How ground truth for the training set was established.
The document only states the types of tests performed (quantification report for CNR and PSNR, and a Likert expert clinical evaluation) and that "All tests passed successfully," implying that the device met internal, but not explicitly stated, acceptance criteria. It also mentions that the "test methods were highly similar to those of the predicate device."
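As background, the two quantification metrics named (PSNR and CNR) are standard image-quality measures. A minimal sketch of common formulations follows; the 510(k) summary does not give the exact formulas used, so these definitions are assumptions:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def cnr(roi_signal: np.ndarray, roi_background: np.ndarray) -> float:
    """Contrast-to-noise ratio between two regions of interest.
    One common definition; the summary does not specify the formula used."""
    return abs(roi_signal.mean() - roi_background.mean()) / (roi_background.std() + 1e-12)
```

Higher PSNR and CNR on enhanced images relative to the originals would support the "enhanced image quality" claim, but without the reported numbers no quantitative comparison is possible here.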
(260 days)
Overjet Calculus Assist (OCalA) is a radiological automated concurrent-read computer-assisted detection software intended to aid in the detection of interproximal calculus deposits on both bitewing and periapical radiographs. The Overjet Calculus Assist surrounds suspected calculus deposits with a bounding box. The device provides additional information for the dentist to use in their diagnosis of a tooth surface suspected of containing calculus deposits. The device is not intended as a replacement for a complete dentist's review or their clinical judgment that takes into account other relevant information from the image or patient history. The system is to be used by professionally trained and licensed dentists.
Overjet Calculus Assist is a module within the Overjet Platform. The Overjet Calculus Assist (OCalA) software automatically detects interproximal calculus on bitewing and periapical radiographs. It is intended to aid dentists in the detection of calculus. It should not be used in lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis. The system is to be used by professionally trained and licensed dentists.
Here's an analysis of the acceptance criteria and study findings for the Overjet Calculus Assist device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
While specific acceptance criteria thresholds are not explicitly stated as numerical values in the document (e.g., "Sensitivity must be >= X%"), the document describes the performance testing conducted and implies that these results met the pre-specified requirements. The performance presented is what the FDA reviewed and deemed acceptable for clearance.
| Metric (Type of Test) | Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|---|
| Standalone Performance | Meets pre-specified requirements for sensitivity and specificity in calculus detection. | **Sensitivity:** Bitewing 74.1% (95% CI: 66.2%, 82.0%); Periapical 72.9% (95% CI: 65.3%, 80.5%). **Specificity:** Bitewing 99.4% (95% CI: 99.1%, 99.6%); Periapical 99.6% (95% CI: 99.3%, 99.8%). **AFROC AUC:** Bitewing 0.859 (95% CI: 0.823, 0.894); Periapical 0.867 (95% CI: 0.828, 0.903) |
| Clinical Performance (Reader Improvement) | Demonstrates superiority of aided reader performance versus unaided reader performance. | **Reader Sensitivity (Unassisted vs. Assisted):** Bitewing improved from 74.9% (68.3%, 80.2%) to 84.0% (78.8%, 88.2%); Periapical improved from 74.7% (69.9%, 79.0%) to 84.4% (78.8%, 89.2%). **Reader Specificity:** Bitewing decreased slightly from 98.8% (98.7%, 99.0%) to 98.6% (98.4%, 98.9%); Periapical decreased slightly from 98.1% (97.8%, 98.4%) to 98.0% (97.7%, 98.4%). **Reader AFROC AUC (average of all readers):** Bitewing increased from 0.840 (0.800, 0.880) to 0.878 (0.844, 0.913), p = 0.0055; Periapical increased from 0.846 (0.808, 0.884) to 0.900 (0.870, 0.929), p = 1.47e-05 |
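Point-estimate-plus-95%-CI figures like those above are conventionally derived from confusion-matrix counts. A hedged sketch using the normal-approximation (Wald) interval; the submission's actual counts and CI method are not stated, so the counts below are hypothetical:

```python
import math

def rate_with_ci(successes: int, total: int, z: float = 1.96):
    """Proportion with a normal-approximation (Wald) 95% CI.
    The submission does not state which interval method was actually used."""
    p = successes / total
    half = z * math.sqrt(p * (1.0 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts for illustration only -- not taken from the submission.
tp, fn = 83, 29        # calculus surfaces detected / missed
tn, fp = 1190, 7       # sound surfaces correctly passed / falsely flagged
sensitivity = rate_with_ci(tp, tp + fn)   # sensitivity = TP / (TP + FN)
specificity = rate_with_ci(tn, tn + fp)   # specificity = TN / (TN + FP)
```

Methods such as Wilson or Clopper-Pearson intervals are also common in CADe submissions and give slightly different bounds at these sample sizes.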
2. Sample Sizes Used for the Test Set and Data Provenance
- Standalone Test Set:
  - Bitewing Radiographs: 296
  - Periapical Radiographs: 322
  - Total Surfaces (Bitewing): 6,121
  - Total Surfaces (Periapical): 3,595
  - Data Provenance: Not explicitly stated, but subgroup analyses by "sensor" and "clinical site" suggest real-world, diverse data. The document does not specify whether the data was retrospective or prospective, or the country of origin.
- Clinical Evaluation (Reader Improvement) Test Set:
  - Bitewing Radiographs: 292 (85 with calculus, 211 without calculus)
  - Periapical Radiographs: 322 (89 with calculus, 233 without calculus)
  - Data Provenance: Not explicitly stated regarding retrospective/prospective design or geographic origin.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Ground Truth Establishment for Clinical Evaluation Test Set:
- Number of Experts: 3 US-licensed dentists formed a consensus for initial labeling. An oral radiologist provided adjudication for non-consensus labels.
- Qualifications of Experts: "US-licensed dentists" and an "oral radiologist." Specific years of experience or specialization within dentistry beyond "oral radiologist" are not provided.
4. Adjudication Method for the Test Set
- Clinical Evaluation Test Set Adjudication:
- Ground truth was established by consensus labels of three US-licensed dentists.
- Non-consensus labels were adjudicated by an oral radiologist. This effectively represents a 3-reader consensus with a 1-reader expert adjudication for disagreements.
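The consensus-plus-adjudication scheme described can be sketched as follows. The document does not define the consensus threshold, so this sketch assumes consensus means unanimity among the three readers (with binary present/absent labels, a 2-of-3 majority always exists, so adjudication would never trigger under a simple-majority rule):

```python
from collections import Counter

def consensus_label(reader_labels: list[str], adjudicator_label: str) -> str:
    """Unanimous 3-reader consensus; on any disagreement, defer to the
    adjudicating oral radiologist. Unanimity is an assumption -- the
    document does not state the consensus threshold."""
    label, count = Counter(reader_labels).most_common(1)[0]
    return label if count == len(reader_labels) else adjudicator_label
```

For example, three "calculus" reads stand as-is, while a 2-1 split falls to the adjudicator under this reading of the scheme.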
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, What Was the Effect Size of Human Reader Improvement With vs. Without AI Assistance?
- Yes, an MRMC comparative effectiveness study was done. It was described as a "multi-reader, fully crossed reader improvement study."
- Effect Size (Improvement with AI vs. without AI assistance):
  - Sensitivity Improvement:
    - Bitewing: +9.1 percentage points (84.0% - 74.9%)
    - Periapical: +9.7 percentage points (84.4% - 74.7%)
  - AFROC AUC Improvement (Reader Average):
    - Bitewing: +0.038 (0.878 - 0.840), p = 0.0055 (statistically significant)
    - Periapical: +0.054 (0.900 - 0.846), p = 1.47e-05 (statistically significant)
  - Specificity: Decreased slightly (by 0.1-0.2 percentage points) when assisted, a common trade-off in CADe systems where increased sensitivity can cost a small amount of specificity.
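The sensitivity effect sizes are simple percentage-point differences of the reported figures, which can be verified directly:

```python
# Reported unassisted vs. assisted reader sensitivity (percent), from the study.
unassisted = {"bitewing": 74.9, "periapical": 74.7}
assisted = {"bitewing": 84.0, "periapical": 84.4}
# Effect size in percentage points (not relative percent change).
delta_pp = {k: round(assisted[k] - unassisted[k], 1) for k in unassisted}
```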
6. If a Standalone Study (i.e., Algorithm-Only Performance, Without a Human in the Loop) Was Done
- Yes, a standalone performance test was conducted.
- The results are detailed in the "Standalone Testing" section, including sensitivity, specificity, and AFROC AUC for the AI algorithm alone.
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)
- For both Standalone and Clinical Evaluation Studies:
- The ground truth was established by expert consensus of US-licensed dentists, with adjudication by an oral radiologist for disagreements. This is a type of "expert consensus" ground truth. The document does not mention pathology or outcomes data.
8. The Sample Size for the Training Set
- The document does not provide the sample size of the training set for the AI model. It only details the test set used for performance evaluation.
9. How the Ground Truth for the Training Set Was Established
- The document does not specify how the ground truth for the training set was established. It only describes the ground truth methodology for the test set used in performance validation.