510(k) Data Aggregation
(148 days)
Second Opinion® 3D
Second Opinion® 3D is a radiological automated image processing software device intended to identify and mark clinically relevant anatomy in dental CBCT radiographs, specifically the Dentition, Maxilla, Mandible, Inferior Alveolar Canal and Mental Foramen (IAN), Maxillary Sinus, Nasal Space, and Airway. It should not be used in lieu of a full patient evaluation or relied upon solely to make or confirm a diagnosis.
It is designed to aid health professionals in reviewing CBCT radiographs of patients 12 years of age or older as a concurrent and second reader.
Second Opinion® 3D is a radiological automated image processing software device intended to identify clinically relevant anatomy in CBCT radiographs. It should not be used in lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis.
It is designed to aid dental health professionals in identifying clinically relevant anatomy on CBCT radiographs of permanent teeth in patients 12 years of age or older as a concurrent and second reader.
Second Opinion® 3D consists of three parts:
- Application Programming Interface ("API")
- Machine Learning Modules ("ML Modules")
- Client User Interface (UI) ("Client")
The processing sequence for an image is as follows:
- Images are uploaded by user
- Images are sent for processing via the API
- The API routes images to the ML modules
- The ML modules produce detection output
- The UI renders the detection output
The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules. The API sends imagery to the machine learning modules for processing and subsequently receives metadata generated by the machine learning modules which is passed to the interface for rendering.
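The processing sequence above can be sketched as a simple pipeline. This is a minimal illustration of the described architecture; all function names and the metadata shape are assumptions, not the vendor's actual API.

```python
# Illustrative sketch of the upload -> API -> ML modules -> UI flow.
# Names and data shapes are hypothetical.

def run_ml_modules(volume):
    """Stand-in for the ML segmentation modules: returns detection
    metadata (here, one mask reference per detected anatomy)."""
    return {"Mandible": "mask_mandible", "Airway": "mask_airway"}

def api_route(volume):
    """The API passes imagery to the ML modules and returns the
    resulting metadata for the client UI to render."""
    detections = run_ml_modules(volume)
    return {"status": "ok", "detections": detections}

def client_render(response):
    """The client UI renders each detection as an overlay layer."""
    return [f"overlay:{name}" for name in response["detections"]]

response = api_route("cbct_volume")
print(client_render(response))  # one overlay per detected anatomy
```

The API itself carries no imaging logic in this sketch; it only routes imagery in and metadata out, matching its described role as a conduit.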
Second Opinion® 3D uses machine learning to identify areas of interest such as individual teeth (including implants and bridge pontics), the Maxillary Complex, Mandible, Inferior Alveolar Canal and Mental Foramen (IAN), Maxillary Sinus, Nasal Space, and Airway. Images received by the ML modules are processed to yield detections, which are represented as metadata. The final output is made accessible to the API, which sends it to the UI for visualization. Masks are displayed as overlays atop the original CBCT radiograph, indicating clinically relevant anatomy to the practitioner. The clinician can toggle overlays on the image to highlight a particular anatomy.
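The overlay-and-toggle behavior described above amounts to alpha-blending a binary anatomy mask over a grayscale slice, with a per-anatomy visibility flag. The following is a generic sketch of that rendering step, not the device's actual implementation; the function name, color, and alpha value are assumptions.

```python
import numpy as np

def blend_overlay(slice_gray, mask, color=(255, 0, 0), alpha=0.4, show=True):
    """Alpha-blend a binary anatomy mask over a grayscale CBCT slice.
    `show=False` mimics the UI toggle that hides a given anatomy."""
    # Promote the grayscale slice to RGB so the overlay can be colored.
    rgb = np.stack([slice_gray] * 3, axis=-1).astype(np.float32)
    if show:
        for c in range(3):
            channel = rgb[..., c]
            channel[mask] = (1 - alpha) * channel[mask] + alpha * color[c]
    return rgb.astype(np.uint8)

# Toy 2x2 slice with one masked voxel.
slice_gray = np.full((2, 2), 100, dtype=np.uint8)
mask = np.array([[True, False], [False, False]])
out = blend_overlay(slice_gray, mask)
print(out[0, 0])  # masked voxel is tinted; unmasked voxels keep value 100
```

Toggling an anatomy off simply re-renders with `show=False`, leaving the original radiograph untouched.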
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for Second Opinion® 3D:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the statistically significant accuracy thresholds for each anatomy segment that the device aims to identify. While explicit numerical thresholds for "passing" are not provided directly in the table, the text states, "Dentition, Maxilla, Mandible, IAN space, Sinus, Nasal space, and airway passed their individually associated threshold." The performance is reported in terms of the mean Dice Similarity Coefficient (DSC) with a 95% Confidence Interval (CI).
| Anatomy ID | Anatomy Name | Acceptance Criteria (Implied) | Reported Device Performance (Mean DSC, 95% CI) | Passes Acceptance? |
|---|---|---|---|---|
| 1 | Dentition | Statistically significant accuracy | 0.86 (0.83, 0.89) | Yes |
| 2 | Maxillary Complex | Statistically significant accuracy | 0.91 (0.91, 0.92) | Yes |
| 3 | Mandible | Statistically significant accuracy | 0.97 (0.97, 0.97) | Yes |
| 4 | IAN Canal | Statistically significant accuracy | 0.76 (0.74, 0.78) | Yes |
| 5 | Maxillary Sinus | Statistically significant accuracy | 0.97 (0.97, 0.98) | Yes |
| 6 | Nasal Space | Statistically significant accuracy | 0.90 (0.89, 0.91) | Yes |
| 7 | Airway | Statistically significant accuracy | 0.95 (0.94, 0.96) | Yes |
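For reference, the Dice Similarity Coefficient used in the table measures the voxel overlap between a predicted segmentation and a reference segmentation, and a 95% CI on the mean per-case DSC is commonly obtained by bootstrapping. The sketch below shows the standard DSC formula and a percentile bootstrap; it illustrates the metric only and is not the study's actual analysis code.

```python
import numpy as np

def dice(pred, truth):
    """Dice Similarity Coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def bootstrap_ci(scores, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean per-case DSC."""
    rng = np.random.default_rng(seed)
    means = [rng.choice(scores, size=len(scores), replace=True).mean()
             for _ in range(n_boot)]
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return float(np.mean(scores)), float(lo), float(hi)

# Toy example: 4 predicted foreground voxels, 4 true, 3 overlapping.
pred = np.zeros((4, 4), dtype=bool); pred[0, :4] = True
truth = np.zeros((4, 4), dtype=bool); truth[0, :3] = True; truth[1, 0] = True
print(dice(pred, truth))  # 2*3 / (4+4) = 0.75
```

A per-anatomy acceptance threshold, as described in the text, would then be a simple comparison against the mean DSC or the lower CI bound.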
2. Sample Size and Data Provenance for the Test Set
- Sample Size: 100 images
- Data Provenance: Anonymized images representing patients across the United States. It is a retrospective dataset, as it consists of pre-existing images.
3. Number of Experts and Qualifications for Ground Truth Establishment
The document does not explicitly state the "number of experts" or their specific "qualifications" (e.g., "radiologist with 10 years of experience") used to establish the ground truth for the test set. It only mentions that the images were "clinically validated."
4. Adjudication Method for the Test Set
The document does not specify an adjudication method (such as 2+1, 3+1, or none) for establishing the ground truth on the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? No, the document describes a "standalone bench performance study" for the device's segmentation accuracy, not a comparative study with human readers involving AI assistance.
- Effect size of human readers with AI vs. without AI assistance: Not applicable, as no MRMC study was performed or reported.
6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance
- Was a standalone study done? Yes, the study described is a standalone bench performance study of the algorithm's segmentation accuracy. The reported Dice Similarity Coefficient scores reflect the algorithm's performance without human intervention after the initial image processing.
7. Type of Ground Truth Used
The ground truth used for the bench testing was established through "clinical validation" of the anatomical structures. Given that the performance metric is Dice Similarity Coefficient (a measure of overlap with a reference segmentation), the ground truth was most likely expert consensus segmentation or an equivalent high-fidelity reference segmentation created by qualified professionals. The term "clinically validated" implies expert review and agreement.
8. Sample Size for the Training Set
The document does not explicitly state the sample size for the training set. It mentions the use of "machine learning techniques" and "neural network algorithms, developed from open-source models using supervised machine learning techniques," implying a training phase, but the size of the dataset used for this phase is not provided.
9. How the Ground Truth for the Training Set Was Established
The document states that the technology utilizes "supervised machine learning techniques." This implies that the ground truth for the training set was established through manual labeling or segmentation by human experts, which then served as the supervision signal for the machine learning models during training. However, the exact methodology (e.g., number of experts, specific process) is not detailed.