510(k) Data Aggregation
(148 days)
Second Opinion® 3D is a radiological automated image processing software device intended to identify and mark clinically relevant anatomy in dental CBCT radiographs; specifically Dentition, Maxilla, Mandible, Inferior Alveolar Canal and Mental Foramen (IAN), Maxillary Sinus, Nasal Space, and Airway. It should not be used in lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis.
It is designed to aid health professionals, as a concurrent and second reader, in reviewing CBCT radiographs of patients 12 years of age or older.
Second Opinion® 3D is a radiological automated image processing software device intended to identify clinically relevant anatomy in CBCT radiographs. It should not be used in lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis.
It is designed to aid dental health professionals, as a concurrent and second reader, in identifying clinically relevant anatomy on CBCT radiographs of permanent teeth in patients 12 years of age or older.
Second Opinion® 3D consists of three parts:
- Application Programming Interface ("API")
- Machine Learning Modules ("ML Modules")
- Client User Interface (UI) ("Client")
The processing sequence for an image is as follows:
- Images are uploaded by user
- Images are sent for processing via the API
- The API routes images to the ML modules
- The ML modules produce detection output
- The UI renders the detection output
The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules. The API sends imagery to the machine learning modules for processing and subsequently receives metadata generated by the machine learning modules which is passed to the interface for rendering.
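To make the conduit role concrete, here is a minimal Python sketch of that routing flow. All names (`process_image`, `run_ml_modules`, `render_overlays`) and the payload shape are hypothetical stand-ins, not the vendor's actual API.

```python
# Minimal sketch of the upload -> API -> ML modules -> UI pipeline described
# above. All function names and the metadata shape are hypothetical.

def run_ml_modules(image_bytes: bytes) -> dict:
    """Stand-in for the ML modules: returns detection metadata for an image."""
    # A real module would run segmentation models here; we return a fixed example.
    return {"detections": [{"anatomy": "Mandible"}, {"anatomy": "Maxillary Sinus"}]}

def process_image(image_bytes: bytes) -> dict:
    """Stand-in for the API: routes imagery to the ML modules and passes the
    resulting metadata back toward the client for rendering."""
    return run_ml_modules(image_bytes)

def render_overlays(metadata: dict) -> None:
    """Stand-in for the client UI: renders each detection as an overlay."""
    for det in metadata["detections"]:
        print(f"Overlay: {det['anatomy']}")

if __name__ == "__main__":
    uploaded = b"\x00" * 16  # placeholder for an uploaded CBCT volume
    render_overlays(process_image(uploaded))
```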
Second Opinion® 3D uses machine learning to identify areas of interest such as individual teeth (including implants and bridge pontics), the Maxillary Complex, Mandible, Inferior Alveolar Canal and Mental Foramen (defined as IAN), Maxillary Sinus, Nasal Space, and Airway. Images received by the ML modules are processed, yielding detections which are represented as metadata. The final output is made accessible to the API for the purpose of sending to the UI for visualization. Masks are displayed as overlays atop the original CBCT radiograph, indicating clinically relevant anatomy to the practitioner. The clinician can toggle over the image to highlight a particular anatomy.
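As an illustration of the overlay rendering step, the following is a minimal NumPy sketch that alpha-blends a binary anatomy mask over a grayscale slice. The data and blending parameters are invented for the example.

```python
# Minimal sketch: alpha-blend a binary anatomy mask over a grayscale CBCT
# slice, as in the overlay rendering described above. Synthetic data only.
import numpy as np

def overlay_mask(slice_gray: np.ndarray, mask: np.ndarray,
                 color=(1.0, 0.0, 0.0), alpha=0.4) -> np.ndarray:
    """Return an RGB image with `mask` blended over `slice_gray` in `color`."""
    rgb = np.stack([slice_gray] * 3, axis=-1)  # grayscale -> RGB
    for c in range(3):
        channel = rgb[..., c]
        channel[mask] = (1 - alpha) * channel[mask] + alpha * color[c]
    return rgb

slice_gray = np.random.rand(64, 64)        # synthetic slice, values in [0, 1]
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True                  # synthetic "anatomy" region
print(overlay_mask(slice_gray, mask).shape)  # (64, 64, 3)
```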
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for Second Opinion® 3D:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the statistically significant accuracy thresholds for each anatomy segment that the device aims to identify. While explicit numerical thresholds for "passing" are not provided directly in the table, the text states, "Dentition, Maxilla, Mandible, IAN space, Sinus, Nasal space, and airway passed their individually associated threshold." The performance is reported in terms of the mean Dice Similarity Coefficient (DSC) with a 95% Confidence Interval (CI).
| Anatomy ID | Anatomy Name | Acceptance Criteria (Implied) | Reported Device Performance (Mean DSC, 95% CI) | Passes Acceptance? |
|---|---|---|---|---|
| 1 | Dentition | Statistically significant accuracy | 0.86 (0.83, 0.89) | Yes |
| 2 | Maxillary Complex | Statistically significant accuracy | 0.91 (0.91, 0.92) | Yes |
| 3 | Mandible | Statistically significant accuracy | 0.97 (0.97, 0.97) | Yes |
| 4 | IAN Canal | Statistically significant accuracy | 0.76 (0.74, 0.78) | Yes |
| 5 | Maxillary Sinus | Statistically significant accuracy | 0.97 (0.97, 0.98) | Yes |
| 6 | Nasal Space | Statistically significant accuracy | 0.90 (0.89, 0.91) | Yes |
| 7 | Airway | Statistically significant accuracy | 0.95 (0.94, 0.96) | Yes |
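For reference, the Dice Similarity Coefficient measures overlap between the predicted and reference segmentations: DSC = 2|A ∩ B| / (|A| + |B|). The sketch below computes per-case DSC and a bootstrap 95% CI on synthetic masks; it illustrates the metric only and is not the sponsor's analysis code.

```python
# Minimal sketch of the Dice Similarity Coefficient (DSC) reported above,
# plus a bootstrap 95% CI over per-case scores. Synthetic data only.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

rng = np.random.default_rng(0)
scores = []
for _ in range(100):                                  # e.g., a 100-image test set
    truth = rng.random((32, 32)) > 0.5
    pred = truth ^ (rng.random((32, 32)) > 0.9)       # flip ~10% of pixels
    scores.append(dice(pred, truth))

boot = [np.mean(rng.choice(scores, len(scores))) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean DSC {np.mean(scores):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```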
2. Sample Size and Data Provenance for the Test Set
- Sample Size: 100 images
- Data Provenance: Anonymized images representing patients across the United States. It is a retrospective dataset, as it consists of pre-existing images.
3. Number of Experts and Qualifications for Ground Truth Establishment
The document does not explicitly state the "number of experts" or their specific "qualifications" (e.g., "radiologist with 10 years of experience") used to establish the ground truth for the test set. It only mentions that the images were "clinically validated."
4. Adjudication Method for the Test Set
The document does not specify an adjudication method (such as 2+1, 3+1, or none) for establishing the ground truth on the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? No, the document describes a "standalone bench performance study" for the device's segmentation accuracy, not a comparative study with human readers involving AI assistance.
- Effect size of human readers with AI vs. without AI assistance: Not applicable, as no MRMC study was performed or reported.
6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance
- Was a standalone study done? Yes, the study described is a standalone bench performance study of the algorithm's segmentation accuracy. The reported Dice Similarity Coefficient scores reflect the algorithm's performance without human intervention after the initial image processing.
7. Type of Ground Truth Used
The ground truth used for the bench testing was established through "clinical validation" of the anatomical structures. Given that the performance metric is Dice Similarity Coefficient (a measure of overlap with a reference segmentation), the ground truth was most likely expert consensus segmentation or an equivalent high-fidelity reference segmentation created by qualified professionals. The term "clinically validated" implies expert review and agreement.
8. Sample Size for the Training Set
The document does not explicitly state the sample size for the training set. It mentions the use of "machine learning techniques" and "neural network algorithms, developed from open-source models using supervised machine learning techniques," implying a training phase, but the size of the dataset used for this phase is not provided.
9. How the Ground Truth for the Training Set Was Established
The document states that the technology utilizes "supervised machine learning techniques." This implies that the ground truth for the training set was established through manual labeling or segmentation by human experts, which then served as the supervision for the machine learning models during training. However, the exact methodology (e.g., number of experts, specific process) is not detailed.
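To illustrate the supervision pattern (and only that; the device itself uses neural networks per the summary), here is a toy example in which expert-drawn masks act as per-pixel labels for a logistic model on voxel intensity:

```python
# Toy illustration of supervised segmentation: expert-drawn masks act as
# per-pixel labels for a logistic model on voxel intensity. The actual device
# uses neural networks; this sketch only shows the supervision pattern.
import numpy as np

rng = np.random.default_rng(1)
intensity = rng.random(10_000)            # synthetic voxel intensities
expert_mask = intensity > 0.6             # stand-in for expert-labeled voxels

w, b = 0.0, 0.0                           # logistic model parameters
for _ in range(500):                      # plain gradient descent
    z = w * intensity + b
    p = 1.0 / (1.0 + np.exp(-z))          # predicted P(voxel in anatomy)
    grad = p - expert_mask                # dLoss/dz for cross-entropy
    w -= 0.1 * np.mean(grad * intensity)
    b -= 0.1 * np.mean(grad)

pred = (1.0 / (1.0 + np.exp(-(w * intensity + b)))) > 0.5
print("training accuracy:", np.mean(pred == expert_mask))
```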
(138 days)
Second Opinion® Pediatric is a computer aided detection ("CADe") software to aid in the detection of caries in bitewing and periapical radiographs.
The intended patient population of the device is patients aged 4 years and older who have primary or permanent teeth (primary or mixed dentition) and are indicated for dental radiographs.
Second Opinion Pediatric is a radiological, automated, computer-assisted detection (CADe) software intended to aid in the detection and segmentation of caries on bitewing and periapical radiographs. The device is not intended as a replacement for a complete dentist's review or their clinical judgment which considers other relevant information from the image, patient history, or actual in vivo clinical assessment.
Second Opinion Pediatric consists of three parts:
- Application Programming Interface ("API")
- Machine Learning Modules ("ML Modules")
- Client User Interface (UI) ("Client")
The processing sequence for an image is as follows:
- Images are sent for processing via the API
- The API routes images to the ML modules
- The ML modules produce detection output
- The UI renders the detection output
The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules. The API sends imagery to the machine learning modules for processing and subsequently receives metadata generated by the machine learning modules which is passed to the interface for rendering.
Second Opinion® Pediatric uses machine learning to detect caries. Images received by the ML modules are processed, yielding detections which are represented as metadata. The final output is made accessible to the API for the purpose of sending to the UI for visualization. Detected caries are displayed as polygonal overlays atop the original radiograph, indicating to the practitioner which teeth contain detected caries that may require clinical review. The clinician can toggle over the image to highlight a potential condition for viewing. Further, the clinician can hover over a detected caries lesion to display an information box showing the segmentation of the caries as percentages.
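The metadata handed from the ML modules to the UI might resemble the following payload. This is purely hypothetical: the field names are illustrative, and the actual schema is not described in the summary.

```python
# Hypothetical detection payload for the caries CADe output described above.
# Field names are illustrative only; the actual schema is not in the summary.
import json

detection_metadata = {
    "image_id": "bitewing-001",
    "detections": [
        {
            "finding": "caries",
            "tooth": "T19",
            "polygon": [[112, 80], [130, 78], [134, 96], [115, 99]],
            "segment_involvement_pct": 23.5,  # shown in the hover box
        }
    ],
}
print(json.dumps(detection_metadata, indent=2))
```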
Here's a breakdown of the acceptance criteria and study details for the Second Opinion® Pediatric device, based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Primary Endpoint: Second Opinion® Pediatric sensitivity for caries detection > 75% for bitewing and periapical images. | Lesion-Level Sensitivity: 0.87 (87%), 95% Confidence Interval (CI): (0.84, 0.90). The test for sensitivity > 70% was statistically significant (p-value < 0.0001). |
| Secondary Endpoints (supported the primary endpoint; specific metrics at right) | False Positives Per Image (FPPI): 1.22 (95% CI: 1.14, 1.30) |
| | Weighted Alternative Free-Response Receiver Operating Characteristic (wAFROC) Figure of Merit (FOM): 0.86 (95% CI: 0.84, 0.88) |
| | Highest-Ranking Receiver Operating Characteristic (HR-ROC) FOM: 0.94 (95% CI: 0.93, 0.96) |
| | Lesion Segment Mean Dice Score: 0.76 (95% CI: 0.75, 0.77); the lower bound of the CI (0.75) is > 0.70. |
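For clarity on the two headline metrics, the sketch below computes lesion-level sensitivity and FPPI from synthetic per-image counts, assuming detections have already been matched to ground-truth lesions (the study's actual matching rule is not described in the summary).

```python
# Minimal sketch of lesion-level sensitivity and false positives per image
# (FPPI), assuming detections are already matched to ground-truth lesions.
# Synthetic per-image records: (GT lesions, true positives, false positives).
images = [(2, 2, 1), (1, 1, 0), (3, 2, 2), (0, 0, 1)]

total_lesions = sum(gt for gt, _, _ in images)
true_positives = sum(tp for _, tp, _ in images)
false_positives = sum(fp for _, _, fp in images)

sensitivity = true_positives / total_lesions  # fraction of lesions detected
fppi = false_positives / len(images)          # false marks per image
print(f"sensitivity {sensitivity:.2f}, FPPI {fppi:.2f}")
```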
Study Details
- Sample sizes used for the test set and the data provenance:
  - Test Set Sample Size: 1182 radiographic images, containing 1085 caries lesions on 549 abnormal images.
  - Data Provenance: Country of origin is not specified in the provided document. The study was retrospective; the document describes it as a "standalone retrospective study."
- Number of experts used to establish the ground truth for the test set and their qualifications:
  - Number of Experts: Not explicitly stated.
  - Qualifications of Experts: Not specified. The document only mentions "Ground Truth"; details on the experts who established it are absent.
- Adjudication method for the test set:
  - Adjudication Method: Not explicitly stated. The document refers to "Ground Truth" but does not detail how potential disagreements among experts (if multiple were used) were resolved. It mentions a "consensus truthing method" for the predicate device's study, which might imply a similar approach, but this is not confirmed for the subject device.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI assistance versus without:
  - MRMC Study: No, an MRMC comparative effectiveness study was not performed for the Second Opinion® Pediatric device (the subject device). The provided text states, "The effectiveness of Second Opinion® Pediatric was evaluated in a standalone performance assessment to validate the CAD." The predicate device description mentions its purpose is to "aid dental health professionals... as a second reader," which implies an assistive role, but no MRMC data on human reader improvement with AI assistance is provided for either the subject or predicate device.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
  - Standalone Study: Yes, a standalone performance assessment was explicitly conducted for the Second Opinion® Pediatric device. The study "assessed the sensitivity of caries detection of Second Opinion® Pediatric compared to the Ground Truth."
- The type of ground truth used:
  - Ground Truth Type: Expert consensus is implied, as the study compared the device's performance against a "Ground Truth" typically established by expert review. For the predicate, a "consensus truthing method" is explicitly mentioned. The document does not specify whether pathology or outcomes data were used.
- The sample size for the training set:
  - Training Set Sample Size: Not specified in the provided document. The document focuses on the validation study.
- How the ground truth for the training set was established:
  - Training Set Ground Truth Establishment: Not specified in the provided document.