Search Results
Found 7 results
510(k) Data Aggregation
(245 days)
Pearl Inc.
Second Opinion® CS is a computer aided detection ("CADe") software to aid in the detection and segmentation of caries in periapical radiographs.
It is designed to aid dental health professionals to review periapical radiographs of permanent teeth in patients 12 years of age or older as a second reader.
Second Opinion CS detects suspected carious lesions and presents them as an overlay of segmented contours. The software highlights detected caries with an overlay and provides a detailed analysis of the lesion's overlap with dentine and enamel, presented as a percentage. The output of Second Opinion CS is a visual overlay of regions of the input radiograph which have been detected as potentially containing caries. The user can hover over the caries detection to see the segmentation analysis.
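The per-lesion overlap analysis described above, which reports how much of a detected lesion falls within dentine versus enamel as a percentage, can be sketched with binary masks. This is a minimal illustration only, assuming mask-based segmentation output; the function and variable names are hypothetical, not Pearl's actual interface.

```python
import numpy as np

def tissue_overlap_percent(lesion_mask: np.ndarray, tissue_mask: np.ndarray) -> float:
    """Percentage of lesion pixels that also lie inside the tissue mask."""
    lesion_px = lesion_mask.sum()
    if lesion_px == 0:
        return 0.0
    return 100.0 * np.logical_and(lesion_mask, tissue_mask).sum() / lesion_px

# Toy 4x4 example: a 4-pixel lesion, half in enamel, half in dentine.
lesion  = np.array([[0,1,1,0],[0,1,1,0],[0,0,0,0],[0,0,0,0]], dtype=bool)
enamel  = np.array([[1,1,1,1],[0,0,0,0],[0,0,0,0],[0,0,0,0]], dtype=bool)
dentine = np.array([[0,0,0,0],[1,1,1,1],[0,0,0,0],[0,0,0,0]], dtype=bool)

print(tissue_overlap_percent(lesion, enamel))   # 50.0
print(tissue_overlap_percent(lesion, dentine))  # 50.0
```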
Second Opinion CS consists of three parts:
- Application Programming Interface ("API")
- Machine Learning Modules ("ML Modules")
- Client User Interface ("Client")
The processing sequence for an image is as follows:
- Images are sent for processing via the API
- The API routes images to the ML modules
- The ML modules produce detection output
- The UI renders the detection output
The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules. The API sends imagery to the machine learning modules for processing and subsequently receives metadata generated by the machine learning modules which is passed to the interface for rendering.
Second Opinion CS uses machine learning to detect and segment caries. Images received by the ML modules are processed yielding detections which are represented as metadata. The final output is made accessible to the API for the purpose of sending to the UI for visualization. Detected carious lesions are displayed as overlays atop the original radiograph which indicate to the practitioner which teeth contain which detected carious lesions that may require clinical review. The clinician can toggle over the image to highlight a potential condition for viewing. Further, the clinician can hover over the detected caries to show a hover information box containing the segmentation of the caries in the form of percentages.
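The API-to-ML-modules-to-UI sequence above can be sketched as three stand-in functions. This is purely illustrative; the function names and metadata layout are assumptions, not the vendor's interface.

```python
def ml_module(image: bytes) -> dict:
    # Stand-in for the ML detection step: returns detections as metadata.
    return {"detections": [{"polygon": [(10, 10), (20, 10), (20, 20)],
                            "label": "caries"}]}

def api_process(image: bytes) -> dict:
    # The API routes the image to the ML module and relays its metadata.
    return ml_module(image)

def render_overlay(metadata: dict) -> str:
    # Stand-in for the client UI: renders each detection as an overlay.
    return "; ".join(f"{d['label']} at {d['polygon']}"
                     for d in metadata["detections"])

overlay = render_overlay(api_process(b"\x89PNG..."))
print(overlay)
```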
Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for Second Opinion® CS:
Acceptance Criteria and Reported Device Performance
Criteria | Reported Device Performance (Standalone Study) | Reported Device Performance (MRMC Study) |
---|---|---|
Primary Endpoint: Overall Caries Detection | Sensitivity > 70% (primary endpoint met). Estimated lesion-level sensitivity was 0.88 (95% CI); the test for sensitivity > 70% was statistically significant (Hommel-adjusted p-value). | wAFROC-FOM (aided vs. unaided): Significant improvement of 0.05 (95% CI: 0.01–0.09, adjusted p=0.0345) in caries detection for periapical images. Standalone CAD vs. unaided readers: Outperformed unaided readers for overall caries (higher wAFROC-FOM and sensitivity). |
Secondary Endpoint: Caries Subgroup (Enamel) | Sensitivity: 0.95 (95% CI: 0.92, 0.97) | wAFROC-FOM (aided vs. unaided): 0.04 (95% CI: 0.01, 0.08) |
Secondary Endpoint: Caries Subgroup (Dentin) | Sensitivity: 0.86 (95% CI: 0.81, 0.90) | wAFROC-FOM (aided vs. unaided): 0.05 (95% CI: 0.02, 0.08) |
False Positives Per Image (FPPI) | Enamel: 0.76 (95% CI: 0.70, 0.83) Dentin: 0.48 (95% CI: 0.43, 0.52) | Overall: Increased slightly by 0.16 (95% CI: -0.03–0.36) Enamel: Rose slightly by 0.21 (95% CI: 0.04, 0.37) Dentin: Rose slightly by 0.08 (95% CI: -0.08, 0.23) |
Lesion-level Sensitivity (Aided vs. Unaided) | Not reported for standalone study. | Significant increase of 0.20 (95% CI: 0.16–0.24) overall. Enamel: 0.19 (95% CI: 0.15-0.23) Dentin: 0.20 (95% CI: 0.16-0.25) |
Surface-level Specificity (Aided vs. Unaided) | Not reported for standalone study. | Decreased marginally by 0.02 (95% CI: -0.04, 0.00) |
Localization and Segmentation Accuracy | Not explicitly reported as a separate metric but inferred through positive sensitivity for enamel and dentin segmentation. | Measured by Jaccard index, consistent across readers, confirming reliable identification of caries and anatomical structures. |
Overall Safety and Effectiveness | Considered safe and effective, with benefits exceeding risks, meeting design verification, validation, and labeling Special Controls required for Class II medical image analyzers. | The study concludes that the device enhances caries detection and reliably segments anatomical structures, affirming its efficacy as a diagnostic aid. |
Study Details
1. Sample Size for Test Set and Data Provenance
- Standalone Test Set: 1250 periapical radiograph images containing 404 overall caries lesions on 286 abnormal images.
- Provenance: Retrospective. Data was collected from various geographical regions within the United States: Northwest (11.0%), Northeast (18.8%), South (29.2%), West (15.6%), and Midwest (25.5%).
- Demographics: Includes radiographs from females (50.1%), males (44.6%), and unknown gender (5.3%). Age distribution: 12-18 (12.3%), 18-75 (81.7%), and 75+ (6.0%).
- Imaging Devices: Carestream-Trophy KodakRVG6100 (25.7%), Carestream-Trophy RVG5200 (3.2%), Carestream-Trophy RVG6200 (27.0%), DEXIS Platinum (19.2%), DEXIS Titanium (18.8%), KodakTrophy KodakRVG6100 (0.8%), and unknown devices (5.3%).
- MRMC Test Set: 330 radiographs with 508 caries lesions across 179 abnormal images.
- Provenance: Not explicitly stated but inferred to be retrospective, similar to the standalone study, given the focus on existing image characteristics.
2. Number of Experts and Qualifications for Test Set Ground Truth
- Standalone Study: Not explicitly stated for the standalone study. However, the MRMC study description clarifies the method for ground truth establishment, which likely applies to the test set used for standalone evaluation as well.
- MRMC Study: Ground truth was determined by four experienced dentists.
- Qualifications: "U.S.-certified dentists" and "experienced dentists."
3. Adjudication Method for Test Set
- Standalone Study: Not explicitly stated, but implies expert consensus was used to establish ground truth.
- MRMC Study: Consensus was achieved when the Jaccard index was ≥0.4 amongst the four experienced dentists. This indicates a form of consensus-based adjudication where a certain level of agreement on lesion boundaries was required.
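The Jaccard-based consensus rule can be illustrated as follows. Note that the letter does not state exactly how the ≥0.4 threshold was applied across the four readers; this sketch assumes a pairwise check, which is one plausible reading, and all names are hypothetical.

```python
import numpy as np

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard index (intersection over union) of two binary masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def lesion_consensus(reader_masks, threshold: float = 0.4) -> bool:
    # Assumed reading of the rule: consensus holds when every pair of
    # reader masks reaches Jaccard >= threshold.
    n = len(reader_masks)
    return all(jaccard(reader_masks[i], reader_masks[j]) >= threshold
               for i in range(n) for j in range(i + 1, n))

# Three readers agree closely; a fourth marks a disjoint region.
base = np.zeros((8, 8), dtype=bool); base[2:5, 2:5] = True
shifted = np.zeros((8, 8), dtype=bool); shifted[2:5, 3:6] = True
disjoint = np.zeros((8, 8), dtype=bool); disjoint[6:8, 6:8] = True

print(lesion_consensus([base, base, shifted]))            # True
print(lesion_consensus([base, base, shifted, disjoint]))  # False
```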
4. MRMC Comparative Effectiveness Study
- Yes, a fully crossed multi-reader multi-case (MRMC) study was done.
- Effect Size (Improvement of human readers with AI vs. without AI assistance):
- Overall Caries Detection (wAFROC-FOM): Aided readers showed a significant improvement of 0.05 (95% CI: 0.01–0.09) in wAFROC-FOM compared to unaided readers.
- Lesion-level Sensitivity: Aided readers showed a significant increase of 0.20 (95% CI: 0.16–0.24) in lesion-level sensitivity.
- False Positives Per Image (FPPI): FPPI increased slightly by 0.16 (95% CI: -0.03–0.36).
- Surface-level Specificity: Decreased marginally by 0.02 (95% CI: -0.04, 0.00).
5. Standalone (Algorithm Only) Performance Study
- Yes, a standalone performance assessment was done to validate the inclusion of a new caries lesion anatomical segmentation.
- Key Results:
- Sensitivity exceeded 70%, with an estimated lesion-level sensitivity of 0.88, and the result was statistically significant.
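Lesion-level sensitivity and false positives per image (FPPI), the two standalone metrics reported above, reduce to simple ratios. The lesion and image totals below come from the test-set description (404 lesions on 1250 images); the detection counts are invented purely for illustration.

```python
def lesion_level_metrics(true_lesions: int, detected_true: int,
                         false_positives: int, n_images: int):
    """Lesion-level sensitivity and false positives per image (FPPI)."""
    return detected_true / true_lesions, false_positives / n_images

# 404 lesions / 1250 images are from the summary; 356 and 600 are made up.
sens, fppi = lesion_level_metrics(404, 356, 600, 1250)
print(round(sens, 2), round(fppi, 2))  # 0.88 0.48
```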
(148 days)
Pearl, Inc.
Second Opinion® 3D is a radiological automated image processing software device intended to identify and mark clinically relevant anatomy in dental CBCT radiographs; specifically Dentition, Maxilla, Mandible, Inferior Alveolar Canal and Mental Foramen (IAN), Maxillary Sinus, Nasal space, and airway. It should not be used in lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis.
It is designed to aid health professionals to review CBCT radiographs of patients 12 years of age or older as a concurrent and second reader.
Second Opinion® 3D is a radiological automated image processing software device intended to identify clinically relevant anatomy in CBCT radiographs. It should not be used in lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis.
It is designed to aid dental health professionals to identify clinically relevant anatomy on CBCT radiographs of permanent teeth in patients 12 years of age or older as a concurrent and second reader.
Second Opinion® 3D consists of three parts:
- Application Programming Interface ("API")
- Machine Learning Modules ("ML Modules")
- Client User Interface (UI) ("Client")
The processing sequence for an image is as follows:
- Images are uploaded by user
- Images are sent for processing via the API
- The API routes images to the ML modules
- The ML modules produce detection output
- The UI renders the detection output
The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules. The API sends imagery to the machine learning modules for processing and subsequently receives metadata generated by the machine learning modules which is passed to the interface for rendering.
Second Opinion® 3D uses machine learning to identify areas of interest such as Individual teeth, including implants and bridge pontics; Maxillary Complex; Mandible; Inferior Alveolar Canal and Mental Foramen (defined as IAN); Maxillary Sinus; Nasal Space; Airway. Images received by the ML modules are processed yielding detections which are represented as metadata. The final output is made accessible to the API for the purpose of sending to the UI for visualization. Masks are displayed as overlays atop the original CBCT radiograph which indicate to the practitioner a clinically relevant anatomy. The clinician can toggle over the image to highlight a particular anatomy.
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for Second Opinion® 3D:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the statistically significant accuracy thresholds for each anatomy segment that the device aims to identify. While explicit numerical thresholds for "passing" are not provided directly in the table, the text states, "Dentition, Maxilla, Mandible, IAN space, Sinus, Nasal space, and airway passed their individually associated threshold." The performance is reported in terms of the mean Dice Similarity Coefficient (DSC) with a 95% Confidence Interval (CI).
AnatomyID | Anatomy Name | Acceptance Criteria (Implied) | Reported Device Performance (Mean DSC, 95% CI) | Passes Acceptance? |
---|---|---|---|---|
1 | Dentition | Statistically significant accuracy | 0.86 (0.83, 0.89) | Yes |
2 | Maxillary Complex | Statistically significant accuracy | 0.91 (0.91, 0.92) | Yes |
3 | Mandible | Statistically significant accuracy | 0.97 (0.97, 0.97) | Yes |
4 | IAN Canal | Statistically significant accuracy | 0.76 (0.74, 0.78) | Yes |
5 | Maxillary Sinus | Statistically significant accuracy | 0.97 (0.97, 0.98) | Yes |
6 | Nasal Space | Statistically significant accuracy | 0.90 (0.89, 0.91) | Yes |
7 | Airway | Statistically significant accuracy | 0.95 (0.94, 0.96) | Yes |
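The Dice Similarity Coefficient used in the table above measures the overlap between a predicted segmentation and a reference segmentation, 2|A∩B| / (|A| + |B|). A minimal sketch, with toy masks:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient: 2|A intersect B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

pred  = np.array([[1,1,0],[1,1,0],[0,0,0]], dtype=bool)
truth = np.array([[1,1,0],[1,0,0],[0,0,0]], dtype=bool)
print(round(dice(pred, truth), 3))  # 0.857
```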
2. Sample Size and Data Provenance for the Test Set
- Sample Size: 100 images
- Data Provenance: Anonymized images representing patients across the United States. It is a retrospective dataset, as it consists of pre-existing images.
3. Number of Experts and Qualifications for Ground Truth Establishment
The document does not explicitly state the "number of experts" or their specific "qualifications" (e.g., "radiologist with 10 years of experience") used to establish the ground truth for the test set. It only mentions that the images were "clinically validated."
4. Adjudication Method for the Test Set
The document does not specify an adjudication method (such as 2+1, 3+1, or none) for establishing the ground truth on the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? No, the document describes a "standalone bench performance study" for the device's segmentation accuracy, not a comparative study with human readers involving AI assistance.
- Effect size of human readers with AI vs. without AI assistance: Not applicable, as no MRMC study was performed or reported.
6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance
- Was a standalone study done? Yes, the study described is a standalone bench performance study of the algorithm's segmentation accuracy. The reported Dice Similarity Coefficient scores reflect the algorithm's performance without human intervention after the initial image processing.
7. Type of Ground Truth Used
The ground truth used for the bench testing was established through "clinical validation" of the anatomical structures. Given that the performance metric is Dice Similarity Coefficient (a measure of overlap with a reference segmentation), the ground truth was most likely expert consensus segmentation or an equivalent high-fidelity reference segmentation created by qualified professionals. The term "clinically validated" implies expert review and agreement.
8. Sample Size for the Training Set
The document does not explicitly state the sample size for the training set. It mentions the use of "machine learning techniques" and "neural network algorithms, developed from open-source models using supervised machine learning techniques," implying a training phase, but the size of the dataset used for this phase is not provided.
9. How the Ground Truth for the Training Set Was Established
The document states that the technology utilizes "supervised machine learning techniques." This implies that the ground truth for the training set was established through manual labeling or segmentation by human experts which then served as the 'supervision' for the machine learning models during their training phase. However, the exact methodology (e.g., number of experts, specific process) is not detailed.
(213 days)
Pearl Inc.
Second Opinion® BLE is a radiological automated image processing software device intended to identify and display bone level measurements in bitewing and periapical radiographs. It should not be used in lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis.
It is designed to aid dental health professionals to review bitewing and periapical radiographs of permanent teeth in patients 12 years of age or older as a concurrent and second reader.
Second Opinion BLE is a radiological automated image processing software device intended to identify and display bone level measurements in bitewing and periapical radiographs. It should not be used in lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis.
It is designed to aid dental health professionals to review bitewing and periapical radiographs of permanent teeth in patients 12 years of age or older as a concurrent and second reader.
Second Opinion BLE consists of three parts:
- Application Programming Interface ("API")
- Machine Learning Modules ("ML Modules")
- Client User Interface (UI) ("Client")
The processing sequence for an image is as follows:
- Images are sent for processing via the API
- The API routes images to the ML modules
- The ML modules produce detection output
- The UI renders the detection output
The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules. The API sends imagery to the machine learning modules for processing and subsequently receives metadata generated by the machine learning modules which is passed to the interface for rendering.
Second Opinion BLE uses machine learning to detect bone level measurements. Images received by the ML modules are processed yielding detections which are represented as metadata. The final output is made accessible to the API for the purpose of sending to the UI for visualization. Detected bone level measurements are displayed as linear overlays atop the original radiograph which indicate to the practitioner which regions contain which detected potential conditions that may require clinical review. The clinician can toggle over the image to highlight a potential condition for viewing.
Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA clearance letter for Second Opinion® BLE:
Acceptance Criteria and Reported Device Performance
Metric | Acceptance Criteria | Reported Device Performance (Bitewing Images) | Reported Device Performance (Periapical Images) |
---|---|---|---|
Precision (for interproximal bone levels) | > 82% | 87% (95% CI: 86%, 88%) | 87% (95% CI: 85%, 89%) |
Recall (for interproximal bone levels) | > 82% | 91% (95% CI: 90%, 92%) | 87% (95% CI: 85%, 89%) |
Mean Absolute Difference (CEJ-to-bone-crest measurement) | | | |
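Precision and recall as used in the table above are standard detection ratios over true positives, false positives, and false negatives. A minimal sketch with invented counts (not figures from the study):

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Precision = TP/(TP+FP); recall (sensitivity) = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Illustrative counts only: 87 correct bone-level detections,
# 13 spurious ones, 13 missed.
p, r = precision_recall(87, 13, 13)
print(round(p, 2), round(r, 2))  # 0.87 0.87
```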
(138 days)
Pearl, Inc.
Second Opinion® Pediatric is a computer aided detection ("CADe") software to aid in the detection of caries in bitewing and periapical radiographs.
The intended patient population of the device is patients aged 4 years and older that have primary or permanent teeth (primary or mixed dentition) and are indicated for dental radiographs.
Second Opinion Pediatric is a radiological, automated, computer-assisted detection (CADe) software intended to aid in the detection and segmentation of caries on bitewing and periapical radiographs. The device is not intended as a replacement for a complete dentist's review or their clinical judgment which considers other relevant information from the image, patient history, or actual in vivo clinical assessment.
Second Opinion Pediatric consists of three parts:
- Application Programming Interface ("API")
- Machine Learning Modules ("ML Modules")
- Client User Interface (UI) ("Client")
The processing sequence for an image is as follows:
- Images are sent for processing via the API
- The API routes images to the ML modules
- The ML modules produce detection output
- The UI renders the detection output
The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules. The API sends imagery to the machine learning modules for processing and subsequently receives metadata generated by the machine learning modules which is passed to the interface for rendering.
Second Opinion® Pediatric uses machine learning to detect caries. Images received by the ML modules are processed yielding detections which are represented as metadata. The final output is made accessible to the API for the purpose of sending to the UI for visualization. Detected caries are displayed as polygonal overlays atop the original radiograph which indicate to the practitioner which teeth contain detected caries that may require clinical review. The clinician can toggle over the image to highlight a potential condition for viewing. Further, the clinician can hover over the detected caries to show a hover information box containing the segmentation of the caries in the form of percentages.
Here's a breakdown of the acceptance criteria and study details for the Second Opinion® Pediatric device, based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Primary Endpoint: Second Opinion® Pediatric sensitivity for caries detection > 75% for bitewing and periapical images. | Lesion-Level Sensitivity: 0.87 (87%) with a 95% Confidence Interval (CI) of (0.84, 0.90). The sensitivity test was statistically significant, with the lower bound of the 95% CI exceeding 0.70. |
Study Details
1. Sample sizes used for the test set and the data provenance:
- Test Set Sample Size: 1182 radiographic images, containing 1085 caries lesions on 549 abnormal images.
- Data Provenance: The document describes a "standalone retrospective study"; the country of origin is not specified.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not explicitly stated.
- Qualifications of Experts: Not specified. The document only mentions "Ground Truth," but details on the experts who established it are absent.
3. Adjudication method for the test set:
- Adjudication Method: Not explicitly stated. The document refers to "Ground Truth" but does not detail how potential disagreements among experts (if multiple were used) were resolved. It mentions a "consensus truthing method" for the predicate device's study, which might imply a similar approach, but this is not confirmed for the subject device.
4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- MRMC Study: No, an MRMC comparative effectiveness study was not performed for the Second Opinion® Pediatric device (the subject device). The provided text states, "The effectiveness of Second Opinion® Pediatric was evaluated in a standalone performance assessment to validate the CAD." The predicate device description mentions its purpose is to "aid dental health professionals... as a second reader," which implies an assistive role, but no MRMC data on human reader improvement with AI assistance is provided for either the subject or predicate device.
5. If a standalone (i.e., algorithm only without human-in-the-loop) performance study was done:
- Standalone Study: Yes, a standalone performance assessment was explicitly conducted for the Second Opinion® Pediatric device. The study "assessed the sensitivity of caries detection of Second Opinion® Pediatric compared to the Ground Truth."
6. The type of ground truth used:
- Ground Truth Type: Expert consensus is implied, as the study compared the device's performance against "Ground Truth" typically established by expert review. For the predicate, the document explicitly mentions a "consensus truthing method." It does not specify whether pathology or outcomes data were used.
7. The sample size for the training set:
- Training Set Sample Size: Not specified in the provided document. The document focuses on the validation study.
8. How the ground truth for the training set was established:
- Training Set Ground Truth Establishment: Not specified in the provided document.
(224 days)
Pearl Inc.
Second Opinion PC is a computer aided detection ("CADe") software to aid dentists in the detection of periapical radiolucencies by drawing bounding polygons to highlight the suspected region of interest.
It is designed to aid dental health professionals to review periapical radiographs of permanent teeth in patients 12 years of age or older as a second reader.
Second Opinion PC (Periapical Radiolucency Contouring) is a radiological, automated, computer-assisted detection (CADe) software intended to aid in the detection of periapical radiolucencies on periapical radiographs using polygonal contours. The device is not intended as a replacement for a complete dentist's review or their clinical judgment which considers other relevant information from the image, patient history, or actual in vivo clinical assessment.
Second Opinion PC consists of three parts:
- Application Programming Interface ("API")
- Machine Learning Modules ("ML Modules")
- Client User Interface ("Client")
The processing sequence for an image is as follows:
- Images are sent for processing via the API
- The API routes images to the ML modules
- The ML modules produce detection output
- The UI renders the detection output
The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules. The API sends imagery to the machine learning modules for processing and subsequently receives metadata generated by the machine learning modules which is passed to the interface for rendering.
Second Opinion PC uses machine learning to detect periapical radiolucencies. Images received by the ML modules are processed yielding detections which are represented as metadata. The final output is made accessible to the API for the purpose of sending to the UI for visualization. Detected periapical radiolucencies are displayed as polygonal overlays atop the original radiograph which indicate to the practitioner which teeth contain which detected periapical radiolucencies that may require clinical review. The clinician can toggle over the image to highlight a potential condition for viewing.
Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Device Performance Study
The Pearl Inc. "Second Opinion Periapical Radiolucency Contours" (Second Opinion PC) device aims to aid dentists in detecting periapical radiolucencies using polygonal contours, functioning as a second reader. The device's performance was evaluated through a standalone clinical study demonstrating non-inferiority to its predicate device, which used bounding boxes.
1. Table of Acceptance Criteria and Reported Device Performance
The submission document primarily focuses on demonstrating non-inferiority to the predicate device rather than explicitly stating pre-defined acceptance criteria with specific thresholds for "passing." However, the implicit acceptance criteria are that the device is non-inferior to its predicate (Second Opinion K210365) in detecting periapical radiolucencies when using polygonal contours.
Acceptance Criterion (Implicit) | Reported Device Performance (Second Opinion PC) |
---|---|
Non-inferiority in periapical radiolucency detection accuracy compared to predicate device (Second Opinion K210365) using bounding boxes. | wAFROC-FOM (Estimated Difference): 0.15 (95% CI: 0.10, 0.21) compared to Second Opinion (predicate) |
(Lower bound of 95% CI (0.10) exceeded -0.05, demonstrating non-inferiority at 5% significance level) | |
Overall detection accuracy (wAFROC-FOM) | wAFROC-FOM: 0.85 (95% CI: 0.81, 0.89) |
Overall detection accuracy (HR-ROC-AUC) | HR-ROC-AUC: 0.93 (95% CI: 0.90, 0.96) |
Lesion level sensitivity | Lesion Level Sensitivity: 77% (95% CI: 69%, 84%) |
Average false positives per image | Average False Positives per Image: 0.28 (95% CI: 0.23, 0.33) |
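The non-inferiority check described in this table reduces to comparing the lower bound of the confidence interval for the performance difference (new device minus predicate) against the margin of -0.05. A minimal sketch:

```python
def non_inferior(ci_lower_diff: float, margin: float = -0.05) -> bool:
    # Non-inferiority is declared when the CI lower bound of the
    # new-minus-predicate difference stays above the negative margin.
    return ci_lower_diff > margin

# From the summary above: wAFROC-FOM difference 0.15, 95% CI (0.10, 0.21).
print(non_inferior(0.10))   # True
print(non_inferior(-0.07))  # False
```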
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 500 unique unannotated periapical radiographs.
- Data Provenance: The dataset is characterized by a representative distribution across:
- Geographical Regions (within the United States):
- Northwest: 116 radiographs (23.2%)
- Southwest: 46 radiographs (9.2%)
- South: 141 radiographs (28.2%)
- East: 84 radiographs (16.8%)
- Midwest: 113 radiographs (22.6%)
- Patient Cohorts (Age Distribution):
- 12-18 years: 4 radiographs (0.8%)
- 18-75 years: 209 radiographs (41.8%)
- 75+ years: 8 radiographs (1.6%)
- Unknown age: 279 radiographs (55.8%)
- Imaging Devices: A variety of devices were used, including Carestream-Trophy (RVG6100, RVG5200, RVG6200), DEXIS (DEXIS, DEXIS Platinum, KaVo Dental Technologies DEXIS Titanium), Kodak-Trophy KodakRVG6100, XDR EV71JU213, and unknown devices.
- Retrospective or Prospective: Not explicitly stated, but the description of "representative distribution" and diverse origins suggests a retrospective collection of existing images.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Four expert readers.
- Qualifications of Experts: Not explicitly stated beyond "expert readers."
4. Adjudication Method for the Test Set
- Adjudication Method: Consensus approach based on agreement among at least three out of four expert readers (3+1 adjudication).
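The 3+1 adjudication rule above, under which a candidate lesion enters the ground truth only when at least three of the four readers marked it, can be sketched as a simple vote count. The helper name is hypothetical, for illustration only:

```python
def enters_ground_truth(reader_marks, required: int = 3) -> bool:
    # A candidate lesion is adjudicated into the ground truth when at
    # least `required` of the readers marked it (the 3-of-4 rule above).
    return sum(reader_marks) >= required

print(enters_ground_truth([True, True, True, False]))   # True
print(enters_ground_truth([True, True, False, False]))  # False
```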
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- Was an MRMC study done? No, a traditional MRMC comparative effectiveness study was not performed for the subject device (Second Opinion PC).
- Effect Size of Human Readers with AI vs. without AI: Not applicable for this specific study of Second Opinion PC. The predicate device (Second Opinion K210365) did undergo MRMC studies, demonstrating statistically significant improvement in aided reader performance for that device. The current study focuses on the standalone non-inferiority of Second Opinion PC compared to its predicate.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Was a standalone study done? Yes, a standalone clinical study was performed. The study compared the performance of Second Opinion PC (polygonal localization) directly with Second Opinion (bounding box localization) in detecting periapical radiolucencies.
- Metrics: wAFROC-FOM and HR-ROC-AUC were used.
- Key Finding: Second Opinion PC was found to be non-inferior to Second Opinion.
7. The Type of Ground Truth Used
- Type of Ground Truth: Expert consensus. The ground truth (GT) was established by the consensus of at least three out of four expert readers who independently marked periapical radiolucencies using the smallest possible polygonal contour.
8. The Sample Size for the Training Set
- Sample Size for Training Set: Not explicitly mentioned in the provided text. The document focuses on the clinical validation (test set).
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: Not explicitly mentioned in the provided text. It is implied that the device was "developed using machine learning techniques" from "open-source models using supervised machine learning," which typically requires a labeled training set, but specifics on its establishment are absent.
(146 days)
Pearl Inc.
Second Opinion® CC is a computer aided detection ("CADe") software to aid dentists in the detection of caries by drawing bounding polygons to highlight the suspected region of interest.
It is designed to aid dental health professionals to review bitewing and periapical radiographs of permanent teeth in patients 19 years of age or older as a second reader.
Second Opinion CC (Caries Contouring) is a radiological, automated, computer-assisted detection (CADe) software intended to aid in the detection of caries on bitewing and periapical radiographs using polygonal contours. The device is not intended as a replacement for a complete dentist's review or their clinical judgment which considers other relevant information from the image, patient history, or actual in vivo clinical assessment.
Second Opinion CC consists of three parts:
- Application Programming Interface ("API")
- Machine Learning Modules ("ML Modules")
- Client User Interface ("Client")
The processing sequence for an image is as follows:
- Images are sent for processing via the API
- The API routes images to the ML modules
- The ML modules produce detection output
- The UI renders the detection output
The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules. The API sends imagery to the machine learning modules for processing and subsequently receives metadata generated by the machine learning modules which is passed to the interface for rendering.
Second Opinion CC uses machine learning to detect caries. Images received by the ML modules are processed yielding detections which are represented as metadata. The final output is made accessible to the API for the purpose of sending to the UI for visualization. Detected carious lesions are displayed as polygonal overlays atop the original radiograph which indicate to the practitioner which teeth contain which detected carious lesions that may require clinical review. The clinician can toggle over the image to highlight a potential condition for viewing. In addition, the clinician has the ability to edit the detections as they see fit to align with their diagnosis.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for Pearl Inc.'s "Second Opinion CC" device:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implied by the non-inferiority study design. The primary performance metric was the Weighted Alternative Free-Response Receiver Operating Characteristic (wAFROC) Figure of Merit (FOM). The acceptance criterion for the Dice coefficient was explicitly stated.
| Metric | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| **Primary Endpoint** | | |
| wAFROC-FOM difference (Second Opinion CC vs. Second Opinion) | Lower bound of 95% CI for difference > -0.05 (non-inferiority to Second Opinion) | 0.26 (95% CI: 0.22, 0.31); lower bound (0.22) exceeds -0.05, demonstrating non-inferiority |
| **Secondary Endpoints / Other Metrics** | | |
| wAFROC-FOM for Second Opinion CC | Not explicitly stated as an acceptance criterion; reported as a measure of efficacy | 0.81 (95% CI: 0.77, 0.85) |
| HR-ROC-AUC for Second Opinion CC | Not explicitly stated as an acceptance criterion; reported as supporting non-inferiority | 0.88 (95% CI: 0.85, 0.91) |
| Lesion-level sensitivity for Second Opinion CC | Not explicitly stated as an acceptance criterion | 90% (95% CI: 87%, 94%) |
| Average false positives per image for Second Opinion CC | Not explicitly stated as an acceptance criterion | 1.34 (95% CI: 1.20, 1.48) |
| Dice coefficient for true positives | Least squares (LS) mean (95% CI) > 0.70 (pre-defined, clinically justified acceptance criterion) | LS mean = 0.73 (95% CI: 0.71, 0.75); lower bound (0.71) exceeds 0.70 |
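The two pre-specified criteria reduce to simple confidence-interval checks, which can be sketched as follows. The CI bounds are those reported in the summary; the helper names are our own:

```python
# Hedged sketch of the two pre-specified acceptance checks.

def non_inferior(ci_lower: float, margin: float = -0.05) -> bool:
    """Non-inferiority holds if the CI lower bound exceeds the margin."""
    return ci_lower > margin


def dice_criterion_met(ls_mean_ci_lower: float, threshold: float = 0.70) -> bool:
    """Dice acceptance: LS-mean CI lower bound must exceed 0.70."""
    return ls_mean_ci_lower > threshold


# wAFROC-FOM difference 95% CI: (0.22, 0.31)
print(non_inferior(0.22))        # True
# Dice LS-mean 95% CI: (0.71, 0.75)
print(dice_criterion_met(0.71))  # True
```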
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 500 images
- Data Provenance: The dataset is characterized by a diverse distribution, including:
- Geographical Regions (within the US): Northwest (15.2%), Southwest (17.8%), South (24.6%), East (22.6%), Midwest (19.6%), and 1 unknown origin.
- Gender Distribution: Females (19.0%), Males (25.0%), Other genders (7.6%), and Unknown gender (48.4%).
- Age: 12-18 (1.8%), 18-75 (46.6%), 75+ (2.6%), and Unknown age (49.0%).
- Imaging Devices: Various models from Carestream-Trophy, DEXIS, and KaVo Dental Technologies, along with unknown devices.
- Image Types: 249 periapical radiographs (49.8%) and 251 bitewing radiographs (50.2%).
- Retrospective/Prospective: The document does not explicitly state whether the data was collected retrospectively or prospectively. The carefully characterized demographic, geographic, and device distributions, however, are typical of a retrospective test set drawn from existing databases.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Four expert readers.
- Qualifications of Experts: The document refers to them as "expert readers" with no further specific details on their qualifications (e.g., years of experience, board certification, specialty).
4. Adjudication Method for the Test Set
- Adjudication Method: Consensus approach based on agreement among at least three out of four expert readers (3+1 or 4/4 consensus).
- "The ground truth (GT) was established using the consensus approach based on agreement among at least three out of four expert readers."
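The 3-of-4 consensus rule can be sketched as a vote count. This is an illustrative simplification under our own assumptions; how the four readers' independent markings are matched to one another is abstracted away:

```python
# Hypothetical sketch of the 3-of-4 reader consensus rule: a candidate
# lesion enters the ground truth only if at least three of the four
# expert readers marked it.

def in_consensus(reader_votes: list, required: int = 3) -> bool:
    """True if at least `required` readers marked the candidate lesion."""
    return sum(bool(v) for v in reader_votes) >= required


print(in_consensus([True, True, True, False]))   # True  (3/4 agree)
print(in_consensus([True, True, False, False]))  # False (only 2/4)
```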
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done
- MRMC Study: No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done for the subject device (Second Opinion CC) in this submission.
- The document explicitly states: "Pearl demonstrated the benefits of the device through a non-inferiority standalone clinical study."
- It also clarifies: "Second Opinion CC was clinically tested as a standalone device in comparison to the predicate device, Second Opinion, using a non-inferiority study."
- It mentions that the original clearance (K210365) for the predicate device (Second Opinion) was "based on standalone and MRMC studies," but this current submission for Second Opinion CC did not include one.
6. If a Standalone (i.e. algorithm only without human-in-the loop performance) was done
- Standalone Study: Yes.
- "Clinical evaluation of Second Opinion CC was performed to validate the efficacy of the system in detecting potential caries lesions using polygons instead of bounding boxes on intraoral radiographs."
- "Second Opinion CC was clinically tested as a standalone device in comparison to the predicate device, Second Opinion, using a non-inferiority study."
- The results for each image were analyzed for "Non-Lesion Localization (NL)" and "Lesion Localization (LL)" directly by the algorithm's output.
7. The Type of Ground Truth Used
- Type of Ground Truth: Expert consensus.
- "The ground truth (GT) was established using the consensus approach based on agreement among at least three out of four expert readers."
- Each GT expert independently marked areas using the smallest possible polygonal contour to encompass the entire region identified.
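The Dice coefficient used to score agreement between a detected contour and the expert consensus contour can be sketched on binary masks. This is a generic illustration, not the submission's implementation; rasterizing the polygonal contours into pixel sets is assumed to have happened already:

```python
# Hedged sketch: Dice coefficient between a predicted and a ground-truth
# binary mask, each represented as a set of (x, y) pixel coordinates.

def dice(pred: set, gt: set) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|); 1.0 for two empty masks."""
    if not pred and not gt:
        return 1.0
    return 2 * len(pred & gt) / (len(pred) + len(gt))


pred = {(x, y) for x in range(4) for y in range(4)}      # 16-pixel square
gt = {(x, y) for x in range(2, 6) for y in range(4)}     # shifted 16-pixel square
print(dice(pred, gt))  # 0.5 (8 overlapping pixels)
```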
8. The Sample Size for the Training Set
- The document does not provide the sample size for the training set. It only describes the test set.
9. How the Ground Truth for the Training Set was Established
- The document does not describe how the ground truth for the training set was established. It only details the ground truth establishment for the test set.
(389 days)
Pearl Inc.
Second Opinion® is a computer aided detection ("CADe") software to identify and mark regions in relation to suspected dental findings which include Caries, Discrepancy at the margin of an existing restoration, Calculus, Periapical radiolucency, Crown (metal, including zirconia & non-metal), Filling (metal & non-metal), Root canal, Bridge and Implants.
It is designed to aid dental health professionals to review bitewing and periapical radiographs of permanent teeth in patients 12 years of age or older as a second reader.
Second Opinion® is a computer aided detection ("CADe") software device indicated for use by dental health professionals as an aid in their assessment of bitewing and periapical radiographs of permanent teeth in patients 12 years of age or older. Second Opinion® employs computer vision technology, developed using machine learning techniques, to detect and draw attention as second reader to regions on bitewing and periapical radiographs where distinct pathologic and/or nonpathologic dental features may appear.
Second Opinion® consists of three parts:
- In-office application or Client User Interface ("Client")
- Application Programming Interface ("API")
- Computer Vision Models ("CV Model", "CV Models")
The Client resides in the clinician's office. The API and CV Models reside in a cloud computing platform, where image processing takes place.
The CV Models create and append to a metadata file information denoting pixel regions and other associated properties of each radiograph. Those associated properties include:
- Normal anatomy (e.g., Teeth)
- Nine radiological dental findings, which include five restorations (crowns, bridges, implants, root canals, fillings) and four pathologies (caries, margin discrepancy - MD, calculus, periapical radiolucency - PR)
The API delivers the metadata back to Second Opinion® via the cloud. The metadata information is displayed in graphical form to clinical users by way of the Second Opinion® Client's user interface.
Here's a summary of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for the Second Opinion® device appear to be primarily based on the demonstrated improvement in reader performance when aided by the device, as well as the device's standalone detection capabilities. The document details the performance metrics used rather than explicitly stating pre-defined "acceptance criteria" with numerical thresholds for all aspects. However, based on the clinical study results and conclusion, the following can be inferred:
| Feature/Metric | Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|---|
| **Standalone Performance** | | |
| wAFROC-FOM (Caries, MD, Calculus, PR) | Comparable performance to unaided readers in detecting features, based on Jaccard Index (JI) ≥ 0.4 for lesion localization | JI ≥ 0.4 (95% CI): Caries (0.73, 0.79), MD (0.71, 0.78), Calculus (0.78, 0.85), PR (0.75, 0.84); JI ≥ 0.5 (95% CI): Caries (0.61, 0.68), MD (0.62, 0.68), Calculus (0.75, 0.81), PR (0.69, 0.78) |
| Standalone sensitivity | Not explicitly stated as a target; performance was assessed | Range: 76.39%–89.77% |
| Standalone false positive rate (FPPI) | Not explicitly stated as a target; performance was assessed | Range: 0.46–4.85 |
| **MRMC (Aided vs. Unaided) Performance** | | |
| Aided reader accuracy improvement | Statistically significant improvement over unaided readers for caries, margin discrepancy (MD), calculus, and periapical radiolucency (PR); all pathologies must meet pre-specified MRMC endpoints; no statistically significant reductions in performance | Statistically significant improvement for Caries, MD, Calculus, and PR. Caries wAFROC-FOM: unaided 0.740, aided 0.758 (P = 0.0062). Sensitivity improvement: 0.9%–11.7%; readers with improved sensitivity: Caries 68%, MD 76%, Calculus 88%, PR 100%. FPPI improvement: 0.08–0.136; readers with improved FPPI: Caries 92%, MD 96%, Calculus 100%, PR 36% |
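The JI ≥ 0.4 localization criterion used to score standalone detections can be sketched for axis-aligned bounding boxes (the annotation primitive this predicate device used). The `(x1, y1, x2, y2)` box format is our assumption:

```python
# Hedged sketch: Jaccard-index matching for axis-aligned bounding boxes,
# as used to decide whether a detection counts as a lesion localization.

def jaccard(a, b) -> float:
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0


def is_lesion_localization(pred_box, gt_box, threshold: float = 0.4) -> bool:
    return jaccard(pred_box, gt_box) >= threshold


# Two 10x10 boxes overlapping by 8 pixels in x: JI = 80/120 ≈ 0.67
print(is_lesion_localization((0, 0, 10, 10), (2, 0, 12, 10)))  # True
```

Detections that fall below the threshold are scored as non-lesion localizations (false positives), which is what the FPPI figures above count.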
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 2,010 images. These images were reviewed by all four Ground Truth (GT) readers for both standalone and MRMC studies.
- Image Composition:
- Caries: 1,640 normal, 370 lesion-containing (655 lesions)
- MD: 1,741 normal, 269 lesion-containing (355 lesions)
- Calculus: 1,766 normal, 244 lesion-containing (467 lesions)
- PR: 1,887 normal, 123 lesion-containing (144 lesions)
- Image Composition:
- Data Provenance: Retrospective, unblinded open-label, multi-site trials. The document does not specify the country of origin.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Four expert readers.
- Qualifications: The document identifies them as "expert readers" in the context of dental radiographs, but does not provide specific qualifications (e.g., years of experience, specialization like radiologist).
4. Adjudication Method for the Test Set
- Adjudication Method: Consensus approach based on agreement among at least three out of four expert readers. Each expert independently marked areas on the radiographs using the smallest possible rectangular bounding box.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and the Effect Size of how much Human Readers Improve with AI vs. without AI Assistance
- MRMC Study Done: Yes, a fully-crossed multi-reader, multi-case (MRMC) retrospective reader study was performed.
- Effect Size (Improvement with AI vs. without AI Assistance):
- Overall Detection Accuracy (wAFROC-FOM for Caries): Unaided: 0.740, Aided: 0.758. This difference was significant (P=0.0062).
- Sensitivity Improvement: The improvement in sensitivity for a single dental finding was in the range of 0.9% to 11.7%.
- For Caries, 68% of readers improved sensitivity.
- For MD, 76% of readers improved sensitivity.
- For Calculus, 88% of readers improved sensitivity.
- For PR, 100% of readers improved sensitivity.
- False Positive Rate (FPPI) Improvement: The improvement in false positive rate for a single dental pathology was in the range of 0.08 to 0.136.
- For Caries, 92% of readers improved FPPI.
- For MD, 96% of readers improved FPPI.
- For Calculus, 100% of readers improved FPPI.
- For PR, 36% of readers improved FPPI.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Standalone Study Done: Yes. The document states: "Second Opinion® was clinically tested as a standalone device and in a fully-crossed multi-case (MRMC) reader study."
7. The Type of Ground Truth Used
- Ground Truth Type: Expert consensus. Specifically, agreement among at least three out of four expert readers who independently marked areas on radiographs using bounding boxes.
8. The Sample Size for the Training Set
- The document does not explicitly state the sample size used for the training set. It mentions the "computer vision models developed using machine learning techniques" and "open-source models using supervised machine learning techniques," implying a training set was used, but the size is not provided in the given text.
9. How the Ground Truth for the Training Set Was Established
- The document states that the computer vision models were developed using "supervised machine learning techniques." This implies that the training set had labels (ground truth) provided. However, the specific method of establishing this ground truth for the training set (e.g., expert consensus like the test set, or a different methodology) is not explicitly detailed in the provided text. It can be inferred that a similar process of expert labeling was used, but it's not confirmed.