510(k) Data Aggregation
(245 days)
MYN
Second Opinion® CS is a computer aided detection ("CADe") software to aid in the detection and segmentation of caries in periapical radiographs.
It is designed to aid dental health professionals to review periapical radiographs of permanent teeth in patients 12 years of age or older as a second reader.
Second Opinion CS detects suspected carious lesions and presents them as an overlay of segmented contours. The software highlights detected caries with an overlay and provides a detailed analysis of the lesion's overlap with dentine and enamel, presented as a percentage. The output of Second Opinion CS is a visual overlay of regions of the input radiograph which have been detected as potentially containing caries. The user can hover over the caries detection to see the segmentation analysis.
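The overlap analysis in the hover box amounts to reporting what fraction of the detected lesion's pixels fall inside the enamel and dentin segmentations. Below is a minimal sketch of that computation, assuming binary numpy masks; the function and mask names are illustrative, not Pearl's actual implementation.

```python
import numpy as np

def overlap_percentages(caries_mask: np.ndarray,
                        enamel_mask: np.ndarray,
                        dentin_mask: np.ndarray) -> dict:
    """Report what share of a detected lesion lies in enamel vs. dentin.

    All inputs are boolean arrays of identical shape, one per tissue class.
    """
    lesion_px = caries_mask.sum()
    if lesion_px == 0:
        return {"enamel": 0.0, "dentin": 0.0}
    return {
        "enamel": 100.0 * (caries_mask & enamel_mask).sum() / lesion_px,
        "dentin": 100.0 * (caries_mask & dentin_mask).sum() / lesion_px,
    }
```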
Second Opinion CS consists of three parts:
- Application Programming Interface ("API")
- Machine Learning Modules ("ML Modules")
- Client User Interface ("Client")
The processing sequence for an image is as follows:
- Images are sent for processing via the API
- The API routes images to the ML modules
- The ML modules produce detection output
- The UI renders the detection output
The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules. The API sends imagery to the machine learning modules for processing and subsequently receives metadata generated by the machine learning modules which is passed to the interface for rendering.
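This conduit pattern can be sketched in a few lines of Python. Everything below (type names, function names, the empty model stub) is hypothetical and only illustrates the image-in, metadata-out flow described above.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    contour: list          # polygon vertices, e.g. [(x0, y0), (x1, y1), ...]
    label: str = "caries"

@dataclass
class DetectionResult:
    detections: list = field(default_factory=list)  # metadata only, no pixels

def ml_module(image_bytes: bytes) -> DetectionResult:
    """Stand-in for the ML module: consumes imagery, emits metadata."""
    return DetectionResult(detections=[])  # model inference would go here

def api_route(image_bytes: bytes) -> DetectionResult:
    """The API passes imagery to the ML module and relays its metadata."""
    return ml_module(image_bytes)

def render(image_bytes: bytes, result: DetectionResult) -> None:
    """Stand-in for the client UI: draws each contour as an overlay."""
    for det in result.detections:
        print(f"overlay {det.label}: {det.contour}")

render(b"...raw radiograph...", api_route(b"...raw radiograph..."))
```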
Second Opinion CS uses machine learning to detect and segment caries. Images received by the ML modules are processed, yielding detections which are represented as metadata. The final output is made accessible to the API for sending to the UI for visualization. Detected carious lesions are displayed as overlays atop the original radiograph, indicating to the practitioner which teeth contain detected carious lesions that may require clinical review. The clinician can toggle over the image to highlight a potential condition for viewing. Further, the clinician can hover over a detected caries to show an information box containing the segmentation of the caries in the form of percentages.
Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for Second Opinion® CS:
Acceptance Criteria and Reported Device Performance
Criteria | Reported Device Performance (Standalone Study) | Reported Device Performance (MRMC Study) |
---|---|---|
Primary Endpoint: Overall Caries Detection | Sensitivity > 70% (primary endpoint met). Estimated lesion-level sensitivity was 0.88, and the test against the 70% threshold was statistically significant (Hommel-adjusted p-value). | wAFROC-FOM (aided vs. unaided): Significant improvement of 0.05 (95% CI: 0.01, 0.09; adjusted p = 0.0345) in caries detection for periapical images. Standalone CAD vs. unaided readers: Outperformed unaided readers for overall caries (higher wAFROC-FOM and sensitivity). |
Secondary Endpoint: Caries Subgroup (Enamel) | Sensitivity: 0.95 (95% CI: 0.92, 0.97) | wAFROC-FOM (aided vs. unaided): 0.04 (95% CI: 0.01, 0.08) |
Secondary Endpoint: Caries Subgroup (Dentin) | Sensitivity: 0.86 (95% CI: 0.81, 0.90) | wAFROC-FOM (aided vs. unaided): 0.05 (95% CI: 0.02, 0.08) |
False Positives Per Image (FPPI) | Enamel: 0.76 (95% CI: 0.70, 0.83); Dentin: 0.48 (95% CI: 0.43, 0.52) | Overall: increased slightly by 0.16 (95% CI: -0.03, 0.36); Enamel: rose slightly by 0.21 (95% CI: 0.04, 0.37); Dentin: rose slightly by 0.08 (95% CI: -0.08, 0.23) |
Lesion-level Sensitivity (Aided vs. Unaided) | Not reported for standalone study. | Significant increase of 0.20 (95% CI: 0.16, 0.24) overall; Enamel: 0.19 (95% CI: 0.15, 0.23); Dentin: 0.20 (95% CI: 0.16, 0.25) |
Surface-level Specificity (Aided vs. Unaided) | Not reported for standalone study. | Decreased marginally by 0.02 (95% CI: -0.04, 0.00) |
Localization and Segmentation Accuracy | Not explicitly reported as a separate metric but inferred through positive sensitivity for enamel and dentin segmentation. | Measured by Jaccard index, consistent across readers, confirming reliable identification of caries and anatomical structures. |
Overall Safety and Effectiveness | Considered safe and effective, with benefits exceeding risks, meeting design verification, validation, and labeling Special Controls required for Class II medical image analyzers. | The study concludes that the device enhances caries detection and reliably segments anatomical structures, affirming its efficacy as a diagnostic aid. |
Study Details
1. Sample Size for Test Set and Data Provenance
- Standalone Test Set: 1250 periapical radiograph images containing 404 overall caries lesions on 286 abnormal images.
- Provenance: Retrospective. Data was collected from various geographical regions within the United States: Northwest (11.0%), Northeast (18.8%), South (29.2%), West (15.6%), and Midwest (25.5%).
- Demographics: Includes radiographs from females (50.1%), males (44.6%), and unknown gender (5.3%). Age distribution: 12-18 (12.3%), 18-75 (81.7%), and 75+ (6.0%).
- Imaging Devices: Carestream-Trophy KodakRVG6100 (25.7%), Carestream-Trophy RVG5200 (3.2%), Carestream-Trophy RVG6200 (27.0%), DEXIS Platinum (19.2%), DEXIS Titanium (18.8%), KodakTrophy KodakRVG6100 (0.8%), and unknown devices (5.3%).
- MRMC Test Set: 330 radiographs with 508 caries lesions across 179 abnormal images.
- Provenance: Not explicitly stated but inferred to be retrospective, similar to the standalone study, given the focus on existing image characteristics.
2. Number of Experts and Qualifications for Test Set Ground Truth
- Standalone Study: Not explicitly stated for the standalone study. However, the MRMC study description clarifies the method for ground truth establishment, which likely applies to the test set used for standalone evaluation as well.
- MRMC Study: Ground truth was determined by four experienced dentists.
- Qualifications: "U.S.-certified dentists" and "experienced dentists."
3. Adjudication Method for Test Set
- Standalone Study: Not explicitly stated, but implies expert consensus was used to establish ground truth.
- MRMC Study: Consensus was achieved when the Jaccard index was ≥0.4 amongst the four experienced dentists. This indicates a form of consensus-based adjudication where a certain level of agreement on lesion boundaries was required.
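The Jaccard (intersection-over-union) agreement rule is easy to state on binary masks. A sketch follows, assuming each reader's annotation is a boolean numpy array; the letter does not say whether agreement was pairwise or against a pooled annotation, so pairwise agreement is assumed here.

```python
import numpy as np
from itertools import combinations

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # two empty annotations agree trivially
    return np.logical_and(a, b).sum() / union

def consensus_reached(reader_masks: list, threshold: float = 0.4) -> bool:
    """True when every pair of readers meets the Jaccard threshold.

    The >= 0.4 threshold is the one stated in the clearance letter;
    the pairwise interpretation is an assumption.
    """
    return all(jaccard(a, b) >= threshold
               for a, b in combinations(reader_masks, 2))
```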
4. MRMC Comparative Effectiveness Study
- Yes, a fully crossed multi-reader multi-case (MRMC) study was done.
- Effect Size (Improvement of human readers with AI vs. without AI assistance):
- Overall Caries Detection (wAFROC-FOM): Aided readers showed a significant improvement of 0.05 (95% CI: 0.01, 0.09) in wAFROC-FOM compared to unaided readers.
- Lesion-level Sensitivity: Aided readers showed a significant increase of 0.20 (95% CI: 0.16, 0.24) in lesion-level sensitivity.
- False Positives Per Image (FPPI): FPPI increased slightly by 0.16 (95% CI: -0.03, 0.36).
- Surface-level Specificity: Decreased marginally by 0.02 (95% CI: -0.04, 0.00).
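These differences are reported with 95% CIs from a wAFROC-based MRMC analysis. As a rough illustration of how a case-level CI for such a difference can be obtained, here is a generic paired bootstrap; it is not the wAFROC methodology actually used in the study, and the per-case scores below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_delta_ci(aided: np.ndarray, unaided: np.ndarray,
                       n_boot: int = 10_000, alpha: float = 0.05):
    """Percentile bootstrap CI for the mean paired difference across cases.

    `aided` and `unaided` hold one per-case score per entry, paired by case.
    """
    diffs = np.asarray(aided) - np.asarray(unaided)
    idx = rng.integers(0, len(diffs), size=(n_boot, len(diffs)))
    boot_means = diffs[idx].mean(axis=1)            # resample cases with replacement
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return diffs.mean(), (lo, hi)

# Synthetic per-case scores for 330 cases (matching the MRMC set size):
aided = rng.normal(0.80, 0.10, 330)
unaided = rng.normal(0.75, 0.10, 330)
print(bootstrap_delta_ci(aided, unaided))
```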
5. Standalone (Algorithm Only) Performance Study
- Yes, a standalone performance assessment was done to validate the inclusion of a new caries lesion anatomical segmentation.
- Key Results:
- Sensitivity was > 70%, with an estimated lesion-level sensitivity of 0.88, which was statistically significant.
(271 days)
MYN
Smile Dx® is a computer-assisted detection (CADe) software designed to aid dentists in the review of digital files of bitewing and periapical radiographs of permanent teeth. It is intended to aid in the detection and segmentation of suspected dental findings which include: caries, periapical radiolucencies (PARL), restorations, and dental anatomy.
Smile Dx® is also intended to aid dentists in the measurement (in millimeter and percentage measurements) of mesial and distal bone levels associated with each tooth.
The device is not intended as a replacement for a complete dentist's review or their clinical judgment that takes into account other relevant information from the image, patient history, and actual in vivo clinical assessment.
Smile Dx® supports both digital and phosphor sensors.
Smile Dx® is a computer assisted detection (CADe) device indicated for use by licensed dentists as an aid in their assessment of bitewing and periapical radiographs of secondary dentition in adult patients. Smile Dx® utilizes machine learning to produce annotations for the following findings:
- Caries
- Periapical radiolucencies
- Bone level measurements (mesial and distal)
- Normal anatomy (enamel, dentin, pulp, and bone)
- Restorations
The provided FDA 510(k) Clearance Letter for Smile Dx® outlines the device's acceptance criteria and the studies conducted to prove it meets those criteria.
Acceptance Criteria and Device Performance
The acceptance criteria are implicitly defined by the performance metrics reported in the "Performance Testing" section. The device's performance is reported in terms of various metrics for both standalone and human-in-the-loop (MRMC) evaluations.
Here's a table summarizing the reported device performance against the implied acceptance criteria:
Table 1: Acceptance Criteria and Reported Device Performance
Feature/Metric | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
Standalone Testing: | | |
Caries Detection | High Dice, Sensitivity | Dice: 0.74 [0.72, 0.76]; Sensitivity (overall): 88.3% [83.5%, 92.6%] |
Periapical Radiolucency (PARL) Detection | High Dice, Sensitivity | Dice: 0.77 [0.74, 0.80]; Sensitivity: 86.1% [80.2%, 91.9%] |
Bone Level Detection (Bitewing) | High Sensitivity, Specificity, Low MAE | Sensitivity: 95.5% [94.3%, 96.7%]; Specificity: 94.0% [91.1%, 96.6%]; MAE: 0.30 mm [0.29 mm, 0.32 mm] |
Bone Level Detection (Periapical) | High Sensitivity, Specificity, Low MAE (percentage) | Sensitivity: 87.3% [85.4%, 89.2%]; Specificity: 92.1% [89.9%, 94.1%]; MAE: 2.6% [2.4%, 2.8%] |
Normal Anatomy Detection | High Dice, Sensitivity, Specificity | Dice: 0.84 [0.83, 0.85]; Sensitivity (pixel-level): 86.1% [85.4%, 86.8%]; Sensitivity (contour-level): 95.2% [94.5%, 96.0%]; Specificity (contour-level): 93.5% [91.6%, 95.8%] |
Restorations Detection | High Dice, Sensitivity, Specificity | Dice: 0.87 [0.85, 0.90]; Sensitivity (pixel-level): 83.1% [80.3%, 86.4%]; Sensitivity (contour-level): 90.9% [88.2%, 93.9%]; Specificity (contour-level): 99.6% [99.3%, 99.8%] |
MRMC Clinical Evaluation - Reader Improvement: | | |
Caries Detection (wAFROC Δθ) | Statistically significant improvement | +0.127 [0.081, 0.172], statistically significant |
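Dice, the overlap metric reported throughout the standalone testing above, is computed directly on binary masks. A minimal sketch (it relates to the Jaccard index J by Dice = 2J / (1 + J)):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient: 2 * |A intersect B| / (|A| + |B|)."""
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```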
(138 days)
MYN
Second Opinion® Pediatric is a computer aided detection ("CADe") software to aid in the detection of caries in bitewing and periapical radiographs.
The intended patient population of the device is patients aged 4 years and older that have primary or permanent teeth (primary or mixed dentition) and are indicated for dental radiographs.
Second Opinion Pediatric is a radiological, automated, computer-assisted detection (CADe) software intended to aid in the detection and segmentation of caries on bitewing and periapical radiographs. The device is not intended as a replacement for a complete dentist's review or their clinical judgment which considers other relevant information from the image, patient history, or actual in vivo clinical assessment.
Second Opinion Pediatric consists of three parts:
- Application Programming Interface ("API")
- Machine Learning Modules ("ML Modules")
- Client User Interface (UI) ("Client")
The processing sequence for an image is as follows:
- Images are sent for processing via the API
- The API routes images to the ML modules
- The ML modules produce detection output
- The UI renders the detection output
The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules. The API sends imagery to the machine learning modules for processing and subsequently receives metadata generated by the machine learning modules which is passed to the interface for rendering.
Second Opinion® Pediatric uses machine learning to detect caries. Images received by the ML modules are processed yielding detections which are represented as metadata. The final output is made accessible to the API for the purpose of sending to the UI for visualization. Detected caries are displayed as polygonal overlays atop the original radiograph which indicate to the practitioner which teeth contain detected caries that may require clinical review. The clinician can toggle over the image to highlight a potential condition for viewing. Further, the clinician can hover over the detected caries to show a hover information box containing the segmentation of the caries in the form of percentages.
Here's a breakdown of the acceptance criteria and study details for the Second Opinion® Pediatric device, based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Primary Endpoint: Second Opinion® Pediatric sensitivity for caries detection > 75% for bitewing and periapical images. | Lesion Level Sensitivity: 0.87 (87%) with a 95% Confidence Interval (CI) of (0.84, 0.90). The test for sensitivity > 70% was statistically significant, with the lower bound of the 95% CI exceeding 0.70. |
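The primary endpoint is a one-sided test that lesion-level sensitivity exceeds a fixed threshold, which can be illustrated with an exact binomial test, assuming scipy. The counts below are a hypothetical reconstruction (0.87 of the 1085 lesions is roughly 944 hits), and 0.75 is used as the threshold even though the letter quotes both a 75% criterion and a 70% test.

```python
from scipy.stats import binomtest

# Hypothetical counts: ~0.87 of the 1085 ground-truth lesions detected.
detected, total = 944, 1085

test = binomtest(detected, total, p=0.75, alternative="greater")
ci = test.proportion_ci(confidence_level=0.95, method="exact")
print(f"sensitivity = {detected / total:.3f}")
print(f"one-sided p-value vs. 75%: {test.pvalue:.2e}")
print(f"95% exact CI: ({ci.low:.3f}, {ci.high:.3f})")
```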
Study Details
-
Sample sizes used for the test set and the data provenance:
- Test Set Sample Size: 1182 radiographic images, containing 1085 caries lesions on 549 abnormal images.
- Data Provenance: Country of origin not specified. The document describes the evaluation as a "standalone retrospective study," so the data collection was retrospective.
-
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not explicitly stated.
- Qualifications of Experts: Not specified. The document only mentions "Ground Truth," but details on the experts who established it are absent.
-
Adjudication method for the test set:
- Adjudication Method: Not explicitly stated. The document refers to "Ground Truth" but does not detail how potential disagreements among experts (if multiple were used) were resolved. The document mentions a "consensus truthing method" for the predicate device's study, which might imply a similar approach, but this is not confirmed for the subject device.
-
If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- MRMC Study: No, an MRMC comparative effectiveness study was not performed for the Second Opinion® Pediatric device (the subject device). The provided text states, "The effectiveness of Second Opinion® Pediatric was evaluated in a standalone performance assessment to validate the CAD." The predicate device description mentions its purpose is to "aid dental health professionals... as a second reader," which implies an assistive role, but no MRMC data on human reader improvement with AI assistance is provided for either the subject or predicate device.
-
If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Standalone Study: Yes, a standalone performance assessment was explicitly conducted for the Second Opinion® Pediatric device. The study "assessed the sensitivity of caries detection of Second Opinion® Pediatric compared to the Ground Truth."
-
The type of ground truth used:
- Ground Truth Type: Expert consensus is implied, as the study compared the device's performance against "Ground Truth" typically established by expert review. For the predicate, it explicitly mentions "consensus truthing method." It does not specify if pathology or outcomes data were used.
-
The sample size for the training set:
- Training Set Sample Size: Not specified in the provided document. The document focuses on the validation study.
-
How the ground truth for the training set was established:
- Training Set Ground Truth Establishment: Not specified in the provided document.
(196 days)
MYN
Lung AI software device is a Computer-Aided Detection (CADe) tool designed to assist in the detection of consolidation/atelectasis and pleural effusion during the review of lung ultrasound scans.
The software is an adjunctive tool to alert users to the presence of regions of interest (ROI) with consolidation/atelectasis and pleural effusion within the analyzed lung ultrasound cine clip.
Lung AI is intended to be used on images collected from the PLAPS point, in accordance with the BLUE protocol.
The intended users are healthcare professionals who are trained and qualified in performing lung ultrasound and routinely perform lung ultrasounds as part of their current practice in a point-of-care environment—namely Emergency Departments (EDs). The device was not designed and tested with use environments representing EMTs and military medics.
Lung AI is not intended for clinical diagnosis and does not replace the healthcare provider's judgment or other diagnostic tests in the standard care for lung ultrasound findings. All cases where a Chest CT scan and/or Chest X-ray is part of the standard of care should undergo these imaging procedures, irrespective of the device output.
The software is indicated for adults only.
Lung AI is a Computer-Aided Detection (CADe) tool designed to assist in the analysis of lung ultrasound images by suggesting the presence of consolidation/atelectasis and pleural effusion in a scan. This adjunctive tool is intended to aid users to detect the presence of regions of interest (ROI) with consolidation/atelectasis and pleural effusion. However, the device does not provide a diagnosis for any disease nor replace any diagnostic testing in the standard of care.
The lung AI module processes Ultrasound cine clips and flags any evidence of pleural effusion and/or consolidation/atelectasis present without aggregating data across regions or making any patient-level decisions. For positive cases, a single ROI per clip from a frame with the largest pleural effusion (or consolidation/atelectasis) is generated as part of the device output. Moreover, the ROI output is for visualization only and should not be relied on for precise anatomical localization. The final decision regarding the overall assessment of the information from all regions/clips remains the responsibility of the user. Lung AI is intended to be used on clips collected only from the PLAPS point, in accordance with the BLUE protocol.
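The one-ROI-per-clip rule reduces to picking the per-frame detection with the largest finding. A sketch, with hypothetical data structures:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FrameDetection:
    frame_index: int
    box: tuple          # (x, y, w, h) in pixels, visualization only
    area: float         # area of the detected effusion/consolidation

def select_clip_roi(frames: list) -> Optional[FrameDetection]:
    """Return one ROI per clip: the detection with the largest area.

    `frames` holds a FrameDetection or None per frame; returns None
    for clips with no detections (negative clips).
    """
    detections = [f for f in frames if f is not None]
    if not detections:
        return None
    return max(detections, key=lambda d: d.area)

clip = [None, FrameDetection(3, (40, 60, 32, 20), 640.0),
        FrameDetection(7, (42, 58, 40, 24), 960.0)]
roi = select_clip_roi(clip)
print(roi.frame_index if roi else "negative clip")   # -> 7
```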
Lung AI is developed as a module to be integrated by another computer programmer into their legally marketed ultrasound imaging device. The software integrates with third-party ultrasound imaging devices and functions as a post-processing tool. The software does not include a built-in viewer; instead, it works within the existing third-party device interface.
Lung AI is validated to meet applicable safety and efficacy requirements and to be generalizable to image data sourced from ultrasound transducers of a specific frequency range.
The device is intended to be used on images of adult patients undergoing point-of-care (POC) lung ultrasound scans in the emergency departments due to suspicion of pleural effusion and/or consolidation/atelectasis. It is important to note that patient management decisions should not be made solely on the results of the Lung AI analysis.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) clearance letter for Lung AI (LAI001).
Acceptance Criteria and Device Performance for Lung AI (LAI001)
The Lung AI (LAI001) device underwent both standalone performance evaluation and a multi-reader, multi-case (MRMC) study to demonstrate its safety and effectiveness.
1. Table of Acceptance Criteria and Reported Device Performance
The document specifies performance metrics based on the standalone evaluation (sensitivity and specificity for detection and localization) and the MRMC study (AUC, sensitivity, and specificity for human reader performance with and without AI assistance). The acceptance criteria for the MRMC study are explicitly stated as an improvement of at least 2% in overall reader performance (AUC-ROC).
Standalone Performance Metrics (Derived from "Summary of Lung AI performance" and "Summary of Lung AI localization performance")
Lung Finding | Metric & Acceptance Criteria (Implicit) | Reported Device Performance (Mean) | 95% Confidence Interval |
---|---|---|---|
Detection | |||
Pleural Effusion | Sensitivity (Se) $\ge$ X.XX | 0.97 | 0.94 – 0.99 |
Pleural Effusion | Specificity (Sp) $\ge$ X.XX | 0.91 | 0.87 – 0.96 |
Consolidation/Atelect. | Sensitivity (Se) $\ge$ X.XX | 0.97 | 0.94 – 0.99 |
Consolidation/Atelect. | Specificity (Sp) $\ge$ X.XX | 0.94 | 0.90 – 0.98 |
Localization | |||
Pleural Effusion | Sensitivity (Se) $\ge$ X.XX (IoU $\ge$ 0.5) | 0.85 | 0.80 – 0.89 |
Pleural Effusion | Specificity (Sp) $\ge$ X.XX (IoU $\ge$ 0.5) | 0.91 | 0.87 – 0.96 |
Consolidation/Atelect. | Sensitivity (Se) $\ge$ X.XX (IoU $\ge$ 0.5) | 0.86 | 0.81 – 0.90 |
Consolidation/Atelect. | Specificity (Sp) $\ge$ X.XX (IoU $\ge$ 0.5) | 0.94 | 0.90 – 0.98 |
Note: Specific numerical acceptance criteria for standalone performance are not explicitly stated in the document, but the reported values demonstrated meeting the required performance for FDA clearance.
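The localization metrics above count a detection as correct only when it overlaps ground truth with IoU ≥ 0.5. A sketch of that check for axis-aligned boxes in (x1, y1, x2, y2) form:

```python
def iou(box_a: tuple, box_b: tuple) -> float:
    """Intersection over union for axis-aligned (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

def is_true_positive(pred_box: tuple, gt_box: tuple) -> bool:
    return iou(pred_box, gt_box) >= 0.5   # threshold from the study
```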
MRMC Study Acceptance Criteria and Reported Device Performance
Lung Finding | Metric | Acceptance Criteria | Reported Device Performance (Mean) | 95% Confidence Interval |
---|---|---|---|---|
Pleural Effusion | AUC-ROC Improvement | ΔAUC-PLEFF $\ge$ 0.02 | 0.035 | 0.025 – 0.047 |
Pleural Effusion | Sensitivity (Se) Unaided | N/A | 0.71 | 0.68 – 0.75 |
Pleural Effusion | Sensitivity (Se) Aided | N/A | 0.88 | 0.86 – 0.92 |
Pleural Effusion | Specificity (Sp) Unaided | N/A | 0.96 | 0.95 – 0.97 |
Pleural Effusion | Specificity (Sp) Aided | N/A | 0.93 | 0.88 – 0.95 |
Consolidation/Atelectasis | AUC-ROC Improvement | ΔAUC-CONS $\ge$ 0.02 | 0.028 | 0.0201 – 0.0403 |
Consolidation/Atelectasis | Sensitivity (Se) Unaided | N/A | 0.73 | 0.72 – 0.80 |
Consolidation/Atelectasis | Sensitivity (Se) Aided | N/A | 0.89 | 0.88 – 0.93 |
Consolidation/Atelectasis | Specificity (Sp) Unaided | N/A | 0.92 | 0.88 – 0.93 |
Consolidation/Atelectasis | Specificity (Sp) Aided | N/A | 0.91 | 0.87 – 0.93 |
2. Sample Size and Data Provenance for Test Set
-
Sample Size for Standalone Test Set: 465 lung scans from 359 unique patients.
-
Data Provenance: Retrospectively collected from 6 imaging centers in the U.S. and Canada, with more than 50% of the data coming from U.S. centers. The dataset was enriched with abnormal cases (at least 30% abnormal per center) and included diverse demographic variables (gender, age 21-96, ethnicity).
-
Sample Size for MRMC Test Set: 322 unique patients (cases). Each of the 6 readers analyzed 748 cases per reading period, for a total of 4488 cases overall.
3. Number of Experts and Qualifications for Ground Truth Establishment (Test Set)
- Number of Experts: Two US board-certified experts initially, with a third expert for adjudication.
- Qualifications of Experts: Experienced in point-of-care ultrasound, reading lung ultrasound scans, and diagnostic radiology.
4. Adjudication Method for Test Set
- Method: In cases of disagreement between the first two experts, a third expert provided adjudication. This is a "2+1" (primary readers + adjudicator) method.
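The "2+1" workflow is a small piece of logic: accept the primary readers' label when they agree, otherwise defer to the third expert. A sketch with per-clip boolean labels; passing the adjudicator as a callable mirrors the fact that the third read happens only on discordant cases.

```python
def adjudicate(reader1: bool, reader2: bool, adjudicator) -> bool:
    """2+1 ground-truthing: the third expert only reads on disagreement."""
    if reader1 == reader2:
        return reader1
    return adjudicator()   # lazy third read, discordant cases only

print(adjudicate(True, True, lambda: False))   # -> True, no third read
print(adjudicate(True, False, lambda: True))   # -> True, adjudicated
```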
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was it done?: Yes, an MRMC study was conducted.
- Effect Size of Improvement:
- Pleural Effusion:
- AUC improved by 0.035 (ΔAUC-PLEFF = 0.035) when aided by the device.
- Sensitivity improved by 0.18 (ΔSe-PLEFF = 0.18) when aided by the device.
- Specificity decreased slightly by 0.03 when aided by the device.
- Consolidation/Atelectasis:
- AUC improved by 0.028 (ΔAUC-CONS = 0.028) when aided by the device.
- Sensitivity improved by 0.16 (ΔSe-CONS = 0.16) when aided by the device.
- Specificity decreased slightly by 0.008 when aided by the device.
6. Standalone (Algorithm Only) Performance Study
- Was it done?: Yes, the "Bench Testing" section describes a standalone performance evaluation.
7. Type of Ground Truth Used
- Type of Ground Truth: Expert consensus (established by two US board-certified experts with a third adjudicator) for the presence/absence of consolidation/atelectasis and pleural effusion per cine clip. They also provided bounding box annotations for localization ground truth.
8. Sample Size for Training Set
- Sample Size: 3,453 ultrasound cine clips from 1,036 patients.
9. How Ground Truth for Training Set Was Established
- The document states that the underlying deep learning models were "trained on a diverse dataset of 3,453 ultrasound cine clips from 1,036 patients." While it doesn't explicitly detail the process for establishing ground truth for the training set, it can be inferred that a similar expert review process, likely involving radiologists or expert sonographers, was used, as is standard practice for supervised deep learning in medical imaging. The clinical confounders mentioned (Pneumonia, Pulmonary Embolism, CHF, Tamponade, Covid19, ARDS, COPD) suggest a robust labeling process to differentiate findings.
(224 days)
MYN
Second Opinion PC is a computer aided detection ("CADe") software to aid dentists in the detection of periapical radiolucencies by drawing bounding polygons to highlight the suspected region of interest.
It is designed to aid dental health professionals to review periapical radiographs of permanent teeth in patients 12 years of age or older as a second reader.
Second Opinion PC (Periapical Radiolucency Contouring) is a radiological, automated, computer-assisted detection (CADe) software intended to aid in the detection of periapical radiolucencies on periapical radiographs using polygonal contours. The device is not intended as a replacement for a complete dentist's review or their clinical judgment which considers other relevant information from the image, patient history, or actual in vivo clinical assessment.
Second Opinion PC consists of three parts:
- Application Programming Interface ("API")
- Machine Learning Modules ("ML Modules")
- Client User Interface ("Client")
The processing sequence for an image is as follows:
- Images are sent for processing via the API
- The API routes images to the ML modules
- The ML modules produce detection output
- The UI renders the detection output
The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules. The API sends imagery to the machine learning modules for processing and subsequently receives metadata generated by the machine learning modules which is passed to the interface for rendering.
Second Opinion PC uses machine learning to detect periapical radiolucencies. Images received by the ML modules are processed, yielding detections which are represented as metadata. The final output is made accessible to the API for sending to the UI for visualization. Detected periapical radiolucencies are displayed as polygonal overlays atop the original radiograph, indicating to the practitioner which teeth contain detected periapical radiolucencies that may require clinical review. The clinician can toggle over the image to highlight a potential condition for viewing.
Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Device Performance Study
The Pearl Inc. "Second Opinion Periapical Radiolucency Contours" (Second Opinion PC) device aims to aid dentists in detecting periapical radiolucencies using polygonal contours, functioning as a second reader. The device's performance was evaluated through a standalone clinical study demonstrating non-inferiority to its predicate device, which used bounding boxes.
1. Table of Acceptance Criteria and Reported Device Performance
The submission document primarily focuses on demonstrating non-inferiority to the predicate device rather than explicitly stating pre-defined acceptance criteria with specific thresholds for "passing." However, the implicit acceptance criteria are that the device is non-inferior to its predicate (Second Opinion K210365) in detecting periapical radiolucencies when using polygonal contours.
Acceptance Criterion (Implicit) | Reported Device Performance (Second Opinion PC) |
---|---|
Non-inferiority in periapical radiolucency detection accuracy compared to predicate device (Second Opinion, K210365) using bounding boxes | wAFROC-FOM (estimated difference): 0.15 (95% CI: 0.10, 0.21) compared to Second Opinion (predicate); the lower bound of the 95% CI (0.10) exceeded -0.05, demonstrating non-inferiority at the 5% significance level |
Overall detection accuracy (wAFROC-FOM) | wAFROC-FOM: 0.85 (95% CI: 0.81, 0.89) |
Overall detection accuracy (HR-ROC-AUC) | HR-ROC-AUC: 0.93 (95% CI: 0.90, 0.96) |
Lesion level sensitivity | Lesion Level Sensitivity: 77% (95% CI: 69%, 84%) |
Average false positives per image | Average False Positives per Image: 0.28 (95% CI: 0.23, 0.33) |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 500 unique unannotated periapical radiographs.
- Data Provenance: The dataset is characterized by a representative distribution across:
- Geographical Regions (within the United States):
- Northwest: 116 radiographs (23.2%)
- Southwest: 46 radiographs (9.2%)
- South: 141 radiographs (28.2%)
- East: 84 radiographs (16.8%)
- Midwest: 113 radiographs (22.6%)
- Patient Cohorts (Age Distribution):
- 12-18 years: 4 radiographs (0.8%)
- 18-75 years: 209 radiographs (41.8%)
- 75+ years: 8 radiographs (1.6%)
- Unknown age: 279 radiographs (55.8%)
- Imaging Devices: A variety of devices were used, including Carestream-Trophy (RVG6100, RVG5200, RVG6200), DEXIS (DEXIS, DEXIS Platinum, KaVo Dental Technologies DEXIS Titanium), Kodak-Trophy KodakRVG6100, XDR EV71JU213, and unknown devices.
- Retrospective or Prospective: Not explicitly stated, but the description of "representative distribution" and diverse origins suggests a retrospective collection of existing images.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Four expert readers.
- Qualifications of Experts: Not explicitly stated beyond "expert readers."
4. Adjudication Method for the Test Set
- Adjudication Method: Consensus approach based on agreement among at least three out of four expert readers (3+1 adjudication).
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- Was an MRMC study done? No, a traditional MRMC comparative effectiveness study was not performed for the subject device (Second Opinion PC).
- Effect Size of Human Readers with AI vs. without AI: Not applicable for this specific study of Second Opinion PC. The predicate device (Second Opinion K210365) did undergo MRMC studies, demonstrating statistically significant improvement in aided reader performance for that device. The current study focuses on the standalone non-inferiority of Second Opinion PC compared to its predicate.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Was a standalone study done? Yes, a standalone clinical study was performed. The study compared the performance of Second Opinion PC (polygonal localization) directly with Second Opinion (bounding box localization) in detecting periapical radiolucencies.
- Metrics: wAFROC-FOM and HR-ROC-AUC were used.
- Key Finding: Second Opinion PC was found to be non-inferior to Second Opinion.
7. The Type of Ground Truth Used
- Type of Ground Truth: Expert consensus. The ground truth (GT) was established by the consensus of at least three out of four expert readers who independently marked periapical radiolucencies using the smallest possible polygonal contour.
8. The Sample Size for the Training Set
- Sample Size for Training Set: Not explicitly mentioned in the provided text. The document focuses on the clinical validation (test set).
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: Not explicitly mentioned in the provided text. It is implied that the device was "developed using machine learning techniques" from "open-source models using supervised machine learning," which typically requires a labeled training set, but specifics on its establishment are absent.
(103 days)
MYN
Rayvolve LN is a computer-aided detection software device to identify and mark regions in relation to suspected pulmonary nodules from 6 to 30mm size. It is designed to aid radiologists in reviewing the frontal (AP/PA) chest radiographs of patients of 18 years of age or older acquired on digital radiographic systems as a second reader and be used with any DICOM Node server. Rayvolve LN provides adjunctive information only and is not a substitute for the original chest radiographic image.
The medical device is called Rayvolve LN. Rayvolve LN is one of the verticals of the Rayvolve product line. It is a standalone software that uses deep learning techniques to detect and localize pulmonary nodules on chest X-rays. Rayvolve LN is intended to be used as an aided-diagnosis device and does not operate autonomously.
Rayvolve LN has been developed to use the current edition of the DICOM image standard. DICOM is the international standard for transmitting, storing, retrieving, printing, processing, and displaying medical imaging.
Using the DICOM standard allows Rayvolve LN to interact with existing DICOM Node servers (e.g., PACS) and clinical-grade image viewers. The device is designed to run on-premise or on a cloud platform, connected to the radiology center's local network, and can interact with the DICOM Node server.
When remotely connected to a medical center's DICOM Node server, Rayvolve LN directly interacts with the DICOM files to output its prediction (the potential presence of pulmonary nodules); the original image appears first, followed by the image processed by Rayvolve LN.
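Result delivery of this kind typically rides on standard DICOM networking. As a loose sketch (not Rayvolve's actual code), here is how a processed result could be pushed to a DICOM node using the pynetdicom library, assuming the result is stored as a Secondary Capture object; the host, port, AE titles, and file name are placeholders.

```python
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import SecondaryCaptureImageStorage

# Load the annotated result image (a DICOM file produced upstream).
ds = dcmread("result.dcm")                       # placeholder file name

ae = AE(ae_title="CAD_NODE")                     # placeholder AE title
ae.add_requested_context(SecondaryCaptureImageStorage)

assoc = ae.associate("pacs.example.local", 104, ae_title="PACS")  # placeholders
if assoc.is_established:
    status = assoc.send_c_store(ds)              # C-STORE the result to the node
    print(f"C-STORE status: 0x{status.Status:04X}")
    assoc.release()
```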
Rayvolve LN does not intend to replace medical doctors. The instructions for use are strictly and systematically transmitted to each user and used to train them on Rayvolve LN's use.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
The document doesn't explicitly list a table of acceptance criteria in the sense of predefined thresholds for performance metrics. Instead, it describes a comparative study where the acceptance criterion is superiority to unaided readers and comparability to a predicate device. The reported device performance is then presented as the outcomes of these studies.
However, we can infer the performance metrics used for evaluation.
Inferred Acceptance Criteria & Reported Device Performance:
Performance Metric | Acceptance Criteria (Implied) | Reader Performance (Unaided) | Reader Performance (Aided) | Standalone Rayvolve LN Performance |
---|---|---|---|---|
Reader AUC (Diagnostic Accuracy) | Superior to unaided reader performance; comparable to predicate. | 0.8071 | 0.8583 | Not directly applicable |
Reader Sensitivity (per image) | Significantly improved from unaided reader. | 0.7975 | 0.8935 | Not directly applicable |
Reader Specificity (per image) | Improved from unaided reader. | 0.8235 | 0.8510 | Not directly applicable |
Standalone Sensitivity | Demonstrates accurate nodule detection. | Not applicable | Not applicable | 0.8847 |
Standalone Specificity | Demonstrates accurate nodule detection. | Not applicable | Not applicable | 0.8294 |
Standalone AUC (ROC) | Demonstrates accurate nodule detection. | Not applicable | Not applicable | 0.8408 |
Note: The direct "acceptance criteria" are implied by the study's primary and secondary objectives (i.e., improvement over unaided reading and comparability to a predicate device). The tables above synthesize the key performance metrics reported.
Study Details:
1. Sample Sizes and Data Provenance:
- Test Set (Standalone Performance): 2181 radiographs. The data provenance is not explicitly stated in terms of country of origin, nor whether it was retrospective or prospective. It is described as "all the study types and views in the indication for use."
- Test Set (Clinical Data - MRMC Study): 400 cases. These cases were "randomly sampled from the validation dataset used for the standalone performance study," implying they are a subset of the 2181 radiographs mentioned above.
- Training Set: The sample size for the training set is not provided in the document.
2. Number of Experts for Ground Truth & Qualifications:
- Number of Experts: The document does not explicitly state the number of experts used to establish the ground truth for the test set. It mentions "ground truth binary labeling indicating the presence or absence of pulmonary nodules" for the MRMC study but doesn't detail how this ground truth was derived.
- Qualifications of Experts: Not specified.
3. Adjudication Method for the Test Set:
- The adjudication method for establishing ground truth is not explicitly detailed. It merely states "ground truth binary labeling indicating the presence or absence of pulmonary nodules."
4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- Yes, an MRMC study was done.
- Effect Size of Improvement:
- Reader AUC: Improved from 0.8071 (unaided) to 0.8583 (aided), a difference of 0.0511. (95% CI: 0.0501; 0.0518)
- Reader Sensitivity (per image): Improved from 0.7975 (unaided) to 0.8935 (aided), a difference of 0.096.
- Reader Specificity (per image): Improved from 0.8235 (unaided) to 0.8510 (aided), a difference of 0.0275.
5. Standalone Performance (Algorithm Only):
- Yes, a standalone performance assessment was done.
- Reported Metrics:
- Sensitivity: 0.8847 (95% CI: 0.8638; 0.9028)
- Specificity: 0.8294 (95% CI: 0.8066; 0.9028)
- AUC: 0.8408 (95% Bootstrap CI: 0.8272; 0.8548)
6. Type of Ground Truth Used:
- The ground truth for both the standalone and MRMC studies was described as "ground truth binary labeling indicating the presence or absence of pulmonary nodules." It does not specify if this was expert consensus, pathology, or outcomes data. However, the context of detecting nodules on chest radiographs for radiologists implies expert consensus as the most probable method.
7. Sample Size for the Training Set:
- Not provided in the document.
8. How Ground Truth for the Training Set was Established:
- Not provided in the document. The document only mentions that the device uses "deep learning techniques" and "supervised Deep learning," which implies labeled training data was used, but details on its establishment are absent.
(270 days)
MYN
Better Diagnostics Caries Assist is a radiological, automated, concurrent read, CADe software intended to identify and localize carious lesions on bitewing and periapical radiographs acquired from patients aged 18 years or older. Better Diagnostics Caries Assist is indicated for use by board-licensed dentists. The device is not intended as a replacement for a complete dentist's review or their clinical judgment that takes into account other relevant information from the image, patient history, or actual in vivo clinical assessment.
Better Diagnostics Caries Assist (BDCA) Version 1.0 is a computer-aided detection (CADe) software designed for the automated detection of carious lesions in Bitewings and periapical dental radiographs. This software offers supplementary information to assist clinicians in their diagnosis of potentially carious tooth surfaces. It is important to note that BDCA v1.0 is not meant to replace a comprehensive clinical evaluation by a clinician, which should consider other pertinent information from the image, the patient's medical history and clinical examination. This software is intended for use in identifying carious lesions in permanent teeth of patients who are 18 years or older.
BDCA v1.0 does not make treatment recommendations or provide a diagnosis. Dentists should review images annotated by BDCA v1.0 concurrently with original, unannotated images before making the final diagnosis on a case. BDCA v1.0 is an adjunct tool and does not replace the role of the dentist. The CAD generated output should not be used as the primary interpretation by the dentists. BDCA v1.0 is not designed to detect conditions other than the following: Caries.
BDCA v1.0 comprises four main components:
- Presentation Layer: This component includes the "AI Results Screen," a web-based interface that allows users to view AI-marked annotations. This is custom code provided by Better Diagnostics AI Corp to Dental PMS customers. The user interface uses AngularJS and Node.js to show images on the "AI Results Screen." The system can process PNG, BMP, and JPG format images; all images are converted into JPEG format for processing. The Computer Vision Models return AI annotations and coordinates to the business layer, which sends the coordinates to the presentation layer, where bounding boxes are drawn on the image using custom AngularJS/Node.js code. Dentists can view, accept, or reject the annotations based on their own evaluation. Better Diagnostics AI provides the UI code to customers, e.g., dental practice management software and imaging firms, for utilization of the BDCA v1.0 software.
- Application Programming Interface (API): APIs are a set of definitions and protocols for building and integrating application software. It's sometimes referred to as a contract between an information provider and an information user. BDCA v1.0 APIs connect the Dental PMS with the business layer. API receives images input from Dental PMS and passes it to the business layer. It also receives annotations and co-ordinates from the business layer and passes it to the presentation layer hosted by Dental PMS.
- Business Layer: Receives image from the API Gateway and passes it to computer vision models. It also receives the bounding boxes coordinates from the model and retrieves images from the cloud storage. It sends all information to the "AI Results screen" to display rectangle bounding boxes.
- Computer Vision Models (CV Models): These models are hosted on a cloud computing platform and are responsible for image processing. They provide a binary indication of the presence of carious findings. If carious findings are detected, the software outputs the coordinates of the bounding boxes for each finding. If no carious lesions are found, the output contains no bounding boxes and displays the message "No Suspected: Caries Detected".
AI models have three parts:
- Pre-Processing Module: Standardizes the image to a specific height and width to maintain consistency for the AI model, and determines the image type: intraoral periapical (IOPA), bitewing, or other. BDCA v1.0 can only process bitewing and IOPA images for patients over age 18; all other image types are rejected. (A sketch of this standardization step follows this list.)
- Core Module: This module provides carious lesion annotations and co-ordinates to draw bounding boxes.
- Post-Processing Module: includes cleanup process to remove outliers/incorrect annotations from the images.
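As referenced in the pre-processing item above, standardizing inputs to a fixed height and width is a common pattern. A sketch using Pillow; the target size is a placeholder, since the actual model input size is not disclosed.

```python
from PIL import Image

TARGET_SIZE = (512, 512)   # placeholder; the real model input size is not disclosed

def preprocess(path: str) -> Image.Image:
    """Convert an input image to JPEG-compatible RGB at a fixed size.

    Mirrors the described flow: PNG/BMP/JPG in, standardized image out.
    """
    img = Image.open(path).convert("RGB")
    return img.resize(TARGET_SIZE, Image.Resampling.BILINEAR)
```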
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for Better Diagnostics Caries Assist (BDCA) Version 1.0.
1. Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly stated as a pass/fail threshold in a dedicated table, but rather embedded in the "predefined performance goals" mentioned in the standalone testing section. The reported device performance is presented against these implicit goals.
Metric (Level) | Predefined Performance Goal (Implicit Acceptance Criteria) | Reported Device Performance (BDCA v1.0) |
---|---|---|
BW Surface Sensitivity | > 0.74 | 89.2% (CI: [86.15%, 92.13%]) |
BW Surface Specificity | > 0.95 | 99.5% (CI: [99.32%, 99.57%]) |
IOPA Surface Sensitivity | > 0.76 | 88.2% (CI: [85.27%, 90.78%]) |
IOPA Surface Specificity | > 0.95 | 99.1% (CI: [98.88%, 99.31%]) |
IOPA Image Sensitivity (Optimistic) | > 0.75 | 91.8% (CI: [88.54%, 94.42%]) |
Note: For image-level sensitivity and specificity, the report provides both "Conservative" and "Optimistic" definitions as per FDA guidance. The "Optimistic" sensitivity for IOPA images had a specific goal. For other image-level metrics and conservative sensitivities, while performance is reported as "robust" and exceeding "performance thresholds," a specific numerical goal isn't explicitly listed in the text. However, the reported values consistently exceed the surface-level goals, implying sufficient performance.
2. Sample Size Used for the Test Set and Data Provenance
From the Standalone Testing section:
- BW Images: 614 images (310 with cavities, 304 without). Within these, 15,687 surfaces were examined (585 positively identified with cavities, 15,102 negative).
- IOPA Images: 684 images (367 with cavities, 317 without). Within these, 9,253 surfaces were examined (618 showed positive results for cavities, 8,635 negative).
- Data Provenance: The document does not explicitly state the country of origin. It indicates that "A patient had at most one BW and one IOPA image included in the analysis dataset." The study states that "Twenty-nine United States (US) licensed dentists" participated in the MRMC study, which may suggest the data is from the US, but this is not definitively stated for the standalone test set. The study type is retrospective as existing images were analyzed.
From the Clinical Evaluation-Reader Improvement (MRMC) Study section:
- Total Radiographs: 328 (108 BW and 220 IOPA).
- BW Images with Cavities: 72 out of 108.
- IOPA Images with Cavities: 91 out of 220.
- BW Surfaces with Cavities: 221 out of 2716.
- IOPA Surfaces with Cavities: 160 out of 2967.
- Data Provenance: The document does not explicitly state the country of origin of the data itself, only that US licensed dentists interpreted them. The study setup implies a retrospective analysis of existing radiographs.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Three experienced, licensed dentists.
- Qualifications: Each with over 10 years of professional experience.
4. Adjudication Method for the Test Set
- Adjudication Method: Consensus of two out of three experts (2+1).
- "Ground truth was determined through the consensus of two out of three experienced, licensed dentists... agreeing on the final labels for analysis when at least two dentists identified a surface as carious."
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
-
Was an MRMC study done? Yes, a multi-reader, multi-case (MRMC) study was conducted.
-
Effect Size of Human Readers Improvement with AI vs. Without AI Assistance:
- AFROC Score (AUC):
- BW Images: Aided AUC = 0.848, Unaided AUC = 0.806. (Improvement = +0.042)
- IOPA Images: Aided AUC = 0.845, Unaided AUC = 0.807. (Improvement = +0.038)
- The differences were statistically significant.
(267 days)
MYN
ChestView US is a radiological Computer-Assisted Detection (CADe) software device that analyzes frontal and lateral chest radiographs of patients presenting with symptoms (e.g. dyspnea, cough, pain) or suspected for findings related to regions of interest (ROIs) in the lungs, airways, mediastinum/hila and pleural space. The device uses machine learning techniques to identify and produces boxes around the ROIs. The boxes are labeled with one of the following radiographic findings: Nodule, Pleural space abnormality, Mediastinum/Hila abnormality, and Consolidation.
ChestView US is intended for use as a concurrent reading aid for radiologists and emergency medicine physicians. It does not replace the role of radiologists and emergency medicine physicians or of other diagnostic testing in the standard of care. ChestView US is for prescription use only and is indicated for adults only.
ChestView US is a radiological Computer-Assisted Detection (CADe) software device intended to analyze frontal and lateral chest radiographs for suspicious regions of interest (ROIs): Nodule, Consolidation, Pleural Space Abnormality and Mediastinum/Hila Abnormality.
The nodule ROI category was developed from images with focal nonlinear opacity with a generally spherical shape situated in the pulmonary interstitium.
The consolidation ROI category was developed from images with area of increased attenuation of lung parenchyma due to the replacement of air in the alveoli.
The pleural space abnormality ROI category was developed from images with:
- Pleural Effusion that is an abnormal presence of fluid in the pleural space
- Pneumothorax that is an abnormal presence of air or gas in the pleural space that separates the parietal and the visceral pleura
The mediastinum/hila abnormality ROI category was developed from images with enlargement of the mediastinum or the hilar region with a deformation of its contours.
ChestView US can be deployed in the cloud and connected to several computing platforms and X-ray imaging platforms such as radiographic systems or PACS. More precisely, ChestView US can be deployed in the cloud connected to a DICOM Source/Destination with a DICOM Viewer, i.e., a PACS.
After the acquisition of the radiographs on the patient and their storage in the DICOM Source, the radiographs are automatically received by ChestView US from the user's DICOM Source through intermediate DICOM node(s) (for example, a specific Gateway, or a dedicated API). The DICOM Source can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems).
Once received by ChestView US, the radiographs are automatically processed by the AI algorithm to identify regions of interest. Based on the processing result, ChestView US generates result files in DICOM format. These result files consist of annotated images with boxes drawn around the regions of interest on a copy of all images (as an overlay). ChestView US does not alter the original images, nor does it change the order of original images or delete any image from the DICOM Source.
Once available, the result files are sent by ChestView US to the DICOM Destination through the same intermediate DICOM node(s). Similar to the DICOM Source, the DICOM Destination can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems). The DICOM Source and the DICOM Destination are not necessarily identical.
The DICOM Destination can be used to visualize the result files provided by ChestView US or to transfer the results to another DICOM host for visualization. The users then use the results as a concurrent reading aid to provide their diagnosis.
For each exam analyzed by ChestView US, a DICOM Secondary Capture is generated.
If any ROI is detected by ChestView US, the output DICOM image includes a copy of the original images of the study and the following information:
- Above the images, a header with the text "CHESTVIEW ROI" and the list of the findings detected in the image.
- Around the ROI(s), a bounding box with a solid or dotted line depending on the confidence of the algorithm and the type of ROI written above the box:
- Dotted-line Bounding Box: Identified region of interest when the confidence degree of the AI algorithm associated with the possible finding is above "high-sensitivity operating point" and below "high specificity operating point" displayed as a dotted bounding box around the area of interest.
- Solid-line Bounding Box: Identified region of interest when the confidence degree of the AI algorithm associated with the finding is above "high-specificity operating point" displayed as a solid bounding box around the area of interest.
- Below the images, a footer with:
- The scope of ChestView US to allow the user to always have available the list of ROI type that are in the indications for use of the device and avoid any risk of confusion or misinterpretation of the types of ROI detected by ChestView US.
- The total number of regions of interest identified by ChestView US on the exam (sum of solid-line and dotted-line bounding boxes)
If no ROI is detected by ChestView US, the output DICOM image includes a copy of the original images of the study and the text "NO CHESTVIEW ROI" with the scope of ChestView US to allow the user to always have available the list of ROI type that are in the indications for use of the device and avoid any risk of confusion or misinterpretation of the types of ROI detected by ChestView US. Finally, if the processing of the exam by ChestView US is not possible because it is outside the indications for use of the device or some information is missing to allow the processing, the output DICOM image includes a copy of the original images of the study and, in a header, the text "OUT OF SCOPE" and a caution message explaining the reason why no result was provided by the device.
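The solid/dotted rendering rule reduces to comparing each finding's confidence against the two operating points. A sketch; the threshold values are placeholders, since the letter does not disclose the device's operating points.

```python
HIGH_SENSITIVITY_OP = 0.30   # placeholder thresholds, not the device's values
HIGH_SPECIFICITY_OP = 0.70

def box_style(confidence: float) -> str:
    """Map a finding's confidence to the bounding-box style, or no box."""
    if confidence >= HIGH_SPECIFICITY_OP:
        return "solid"
    if confidence >= HIGH_SENSITIVITY_OP:
        return "dotted"
    return "none"   # below the high-sensitivity operating point: not shown

print([box_style(c) for c in (0.2, 0.5, 0.9)])   # ['none', 'dotted', 'solid']
```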
Here's a breakdown of the acceptance criteria and study details for ChestView US:
1. Table of Acceptance Criteria and Reported Device Performance
Standalone Performance (ChestView US)
Numerical acceptance criteria are not explicitly stated for the standalone metrics; performance is reported at two operating points (OP). All intervals are 95% bootstrap CIs.

ROI | AUC (95% CI) | Sensitivity @ High-Sensitivity OP (95% CI) | Specificity @ High-Sensitivity OP (95% CI) | Sensitivity @ High-Specificity OP (95% CI) | Specificity @ High-Specificity OP (95% CI) |
---|---|---|---|---|---|
NODULE | 0.93 [0.921, 0.938] | 0.829 [0.801, 0.86] | 0.956 [0.948, 0.963] | 0.482 [0.455, 0.518] | 0.994 [0.99, 0.996] |
MEDIASTINUM/HILA ABNORMALITY | 0.922 [0.91, 0.934] | 0.793 [0.739, 0.832] | 0.975 [0.971, 0.98] | 0.535 [0.475, 0.592] | 0.992 [0.99, 0.994] |
CONSOLIDATION | 0.952 [0.947, 0.957] | 0.853 [0.822, 0.879] | 0.946 [0.938, 0.952] | 0.61 [0.583, 0.643] | 0.985 [0.981, 0.989] |
PLEURAL SPACE ABNORMALITY | 0.973 [0.97, 0.975] | 0.892 [0.87, 0.911] | 0.965 [0.958, 0.971] | 0.87 [0.85, 0.896] | 0.975 [0.97, 0.981] |
MRMC Study Acceptance Criteria and Reported Performance (Improvement with AI Aid)
ROI Category | Reader Type | Acceptance Criteria (AUC Improvement) | Reported AUC Improvement | 95% Confidence Interval for AUC Improvement | P-value |
---|---|---|---|---|---|
Nodule | Emergency Medicine Physicians | Not explicitly stated as a numerical threshold, but "significantly improved" | 0.136 | [0.107, 0.17] | |
(146 days)
MYN
Second Opinion® CC is a computer aided detection ("CADe") software to aid dentists in the detection of caries by drawing bounding polygons to highlight the suspected region of interest.
It is designed to aid dental health professionals to review bitewing and periapical radiographs of permanent teeth in patients 19 years of age or older as a second reader.
Second Opinion CC (Caries Contouring) is a radiological, automated, computer-assisted detection (CADe) software intended to aid in the detection of caries on bitewing and periapical radiographs using polygonal contours. The device is not intended as a replacement for a complete dentist's review or their clinical judgment which considers other relevant information from the image, patient history, or actual in vivo clinical assessment.
Second Opinion CC consists of three parts:
- Application Programming Interface ("API")
- Machine Learning Modules ("ML Modules")
- Client User Interface ("Client")
The processing sequence for an image is as follows:
- Images are sent for processing via the API
- The API routes images to the ML modules
- The ML modules produce detection output
- The UI renders the detection output
The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules. The API sends imagery to the machine learning modules for processing and subsequently receives metadata generated by the machine learning modules which is passed to the interface for rendering.
Second Opinion CC uses machine learning to detect caries. Images received by the ML modules are processed, yielding detections which are represented as metadata. The final output is made accessible to the API for the purpose of sending to the UI for visualization. Detected carious lesions are displayed as polygonal overlays atop the original radiograph, indicating to the practitioner which teeth contain detected carious lesions that may require clinical review. The clinician can toggle over the image to highlight a potential condition for viewing and can edit the detections to align with their diagnosis.
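The summary describes this flow but not the code. The following is a minimal, hypothetical Python sketch of the API, ML modules, and client sequence, with all function names and the metadata schema invented for illustration:

```python
# Hypothetical end-to-end flow mirroring the described sequence: the API
# receives the image, routes it to the ML modules, and returns detection
# metadata that the client UI renders as polygon overlays.

def ml_detect_caries(image_bytes):
    """Stand-in for the ML modules: returns detections as metadata."""
    return [{"tooth": "14",
             "polygon": [(10, 12), (34, 15), (30, 40)],
             "confidence": 0.91}]

def api_process_image(image_bytes):
    """Stand-in for the API: routes imagery to the ML modules and passes
    the resulting metadata back for rendering."""
    return {"detections": ml_detect_caries(image_bytes)}

def client_render(result):
    """Stand-in for the client UI: draws each polygon atop the radiograph."""
    for det in result["detections"]:
        print(f"overlay tooth {det['tooth']}: {det['polygon']}")

client_render(api_process_image(b"raw radiograph bytes"))
```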
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for Pearl Inc.'s "Second Opinion CC" device:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implied by the non-inferiority study design. The primary performance metric was the Weighted Alternative Free-Response Receiver Operating Characteristic (wAFROC) Figure of Merit (FOM). The acceptance criterion for the Dice coefficient was explicitly stated.
Metric | Acceptance Criteria | Reported Device Performance
---|---|---
Primary Endpoint | |
wAFROC-FOM difference (Second Opinion CC vs. Second Opinion) | Lower bound of 95% CI for difference > -0.05 (non-inferiority to Second Opinion) | 0.26 (95% CI: 0.22, 0.31); lower bound (0.22) exceeds -0.05, demonstrating non-inferiority
Secondary Endpoints / Other Metrics | |
wAFROC-FOM for Second Opinion CC | Not stated as an acceptance criterion; reported as a measure of efficacy | 0.81 (95% CI: 0.77, 0.85)
HR-ROC-AUC for Second Opinion CC | Not stated as an acceptance criterion; reported as supporting non-inferiority | 0.88 (95% CI: 0.85, 0.91)
Lesion-level sensitivity for Second Opinion CC | Not stated as an acceptance criterion | 90% (95% CI: 87%, 94%)
Average false positives per image for Second Opinion CC | Not stated as an acceptance criterion | 1.34 (95% CI: 1.20, 1.48)
Dice coefficient for true positives | Least squares (LS) mean (95% CI) > 0.70 (pre-defined, clinically justified acceptance criterion) | LS mean = 0.73 (95% CI: 0.71, 0.75); lower bound (0.71) exceeds 0.70
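Operationally, the primary-endpoint decision reduces to comparing the lower bound of the 95% CI of the wAFROC-FOM difference against the -0.05 margin. A minimal sketch, assuming bootstrap samples of the FOM difference are already available (the study's actual CI construction is not detailed in the summary):

```python
import numpy as np

def non_inferior(fom_diff_samples, margin=-0.05, alpha=0.05):
    """Declare non-inferiority if the lower bound of the two-sided
    (1 - alpha) percentile CI of the FOM difference exceeds the margin."""
    lower = np.percentile(fom_diff_samples, 100 * alpha / 2)
    return lower > margin, lower

# With the reported CI (0.22, 0.31), the lower bound 0.22 > -0.05,
# so non-inferiority is demonstrated.
```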
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 500 images
- Data Provenance: The dataset is characterized by a diverse distribution, including:
- Geographical Regions (within the US): Northwest (15.2%), Southwest (17.8%), South (24.6%), East (22.6%), Midwest (19.6%), and 1 image of unknown origin (0.2%).
- Gender Distribution: Females (19.0%), Males (25.0%), Other genders (7.6%), and Unknown gender (48.4%).
- Age: 12-18 (1.8%), 18-75 (46.6%), 75+ (2.6%), and Unknown age (49.0%).
- Imaging Devices: Various models from Carestream-Trophy, DEXIS, and KaVo Dental Technologies, along with unknown devices.
- Image Types: 249 periapical radiographs (49.8%) and 251 bitewing radiographs (50.2%).
- Retrospective/Prospective: The document does not explicitly state whether the data was collected retrospectively or prospectively. However, the carefully characterized, diverse distribution of demographic and technical factors across the set suggests a retrospective collection from existing databases.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Four expert readers.
- Qualifications of Experts: The document refers to them as "expert readers" with no further specific details on their qualifications (e.g., years of experience, board certification, specialty).
4. Adjudication Method for the Test Set
- Adjudication Method: Consensus based on agreement among at least three of the four expert readers (3/4 or 4/4 consensus).
- "The ground truth (GT) was established using the consensus approach based on agreement among at least three out of four expert readers."
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done
- MRMC Study: No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done for the subject device (Second Opinion CC) in this submission.
- The document explicitly states: "Pearl demonstrated the benefits of the device through a non-inferiority standalone clinical study."
- It also clarifies: "Second Opinion CC was clinically tested as a standalone device in comparison to the predicate device, Second Opinion, using a non-inferiority study."
- It mentions that the original clearance (K210365) for the predicate device (Second Opinion) was "based on standalone and MRMC studies," but this current submission for Second Opinion CC did not include one.
6. If a Standalone (i.e. algorithm only without human-in-the loop performance) was done
- Standalone Study: Yes.
- "Clinical evaluation of Second Opinion CC was performed to validate the efficacy of the system in detecting potential caries lesions using polygons instead of bounding boxes on intraoral radiographs."
- "Second Opinion CC was clinically tested as a standalone device in comparison to the predicate device, Second Opinion, using a non-inferiority study."
- The results for each image were analyzed for "Non-Lesion Localization (NL)" and "Lesion Localization (LL)" directly by the algorithm's output.
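The summary does not give the rule used to match a detection to a ground-truth lesion. A common convention is an overlap threshold; the sketch below classifies detections as LL or NL using IoU against GT masks, with the threshold value being an assumption:

```python
import numpy as np

def score_detections(det_masks, gt_masks, iou_thresh=0.5):
    """Classify each detection as LL (matches some GT lesion) or NL.

    det_masks, gt_masks: lists of boolean arrays of identical shape.
    Returns (n_LL, n_NL, lesion_sensitivity).
    """
    def iou(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0

    matched_gt = set()
    n_ll = n_nl = 0
    for det in det_masks:
        hits = [i for i, gt in enumerate(gt_masks) if iou(det, gt) >= iou_thresh]
        if hits:
            n_ll += 1
            matched_gt.update(hits)
        else:
            n_nl += 1
    sensitivity = len(matched_gt) / len(gt_masks) if gt_masks else float("nan")
    return n_ll, n_nl, sensitivity
```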
7. The Type of Ground Truth Used
- Type of Ground Truth: Expert consensus.
- "The ground truth (GT) was established using the consensus approach based on agreement among at least three out of four expert readers."
- Each GT expert independently marked areas using the smallest possible polygonal contour to encompass the entire region identified.
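The Dice coefficient reported for true positives (table above) measures the overlap between a detection polygon and the corresponding GT polygon; its standard definition for binary masks A and B is 2|A∩B| / (|A| + |B|):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two boolean masks of the same shape."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```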
8. The Sample Size for the Training Set
- The document does not provide the sample size for the training set. It only describes the test set.
9. How the Ground Truth for the Training Set was Established
- The document does not describe how the ground truth for the training set was established. It only details the ground truth establishment for the test set.