DeepTek CXR Analyzer v1.0 is a computer-assisted detection (CADe) software device that analyzes chest radiograph studies using machine learning techniques to identify, categorize, and highlight suspicious regions of interest (ROIs) in one of the following categories: Lungs, Pleura, Cardiac, and Hardware. The device is intended for use as a concurrent reading aid for radiologists. DeepTek CXR Analyzer v1.0 is indicated only for adults and for transitional adolescents (18 to less than 22 years of age, evaluated as adults).
DeepTek CXR Analyzer is a computer-assisted detection (CADe) software device developed to assist radiologists in identifying suspicious regions of interest (ROIs) in the following categories: Lungs, Pleura, Cardiac, and Hardware. DeepTek CXR Analyzer detects suspicious ROIs by analyzing adult frontal chest radiographs using deep learning algorithms and provides relevant annotations to assist radiologists with their interpretations.
The device provides a graphical user interface for user authentication. Through the configuration interface, the user connects DeepTek CXR Analyzer to a PACS by entering the PACS AE Title, IP address, and listener and sender port numbers. Once configured, DeepTek CXR Analyzer receives chest radiographs from the PACS in DICOM format as input, identifies suspicious ROIs in the Lungs, Pleura, Cardiac, and Hardware categories, and sends a secondary-capture DICOM with the AI output back to the same PACS over the DICOM protocol. The output DICOM file processing component creates a DICOM image containing the original radiograph, a message stating that the image was analyzed by DeepTek CXR Analyzer (including the manufacturer name, product name, product version, and a link to the user manual), and color-coded bounding boxes around the suspected ROIs. If no suspicious ROIs are detected, the output contains no bounding boxes and carries the message "No Suspicious ROI(s) Detected". If any step of the workflow fails, a human-readable error message describing the type of failure is logged in the Logs interface.
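The summary above describes the integration only at a workflow level. As a purely illustrative aid, the sketch below models the configuration fields mentioned (PACS AE Title, IP address, listener and sender ports) and the two possible output messages; all names are hypothetical and do not reflect the device's actual software.

```python
from dataclasses import dataclass

# Hypothetical representation of the PACS connection settings collected by the
# configuration interface; field names are illustrative only.
@dataclass
class PacsConfig:
    ae_title: str       # PACS Application Entity Title
    ip_address: str     # PACS host IP address
    listener_port: int  # port on which the analyzer listens for incoming DICOM
    sender_port: int    # port used to send the secondary-capture DICOM back

def annotation_summary(num_rois: int) -> str:
    """Mirror the output behavior described above: bounding boxes when ROIs are
    found, otherwise the fixed "no findings" message."""
    if num_rois == 0:
        return "No Suspicious ROI(s) Detected"
    return f"{num_rois} color-coded bounding box(es) added to the secondary capture"

if __name__ == "__main__":
    cfg = PacsConfig("PACS_MAIN", "192.168.1.10", listener_port=104, sender_port=11112)
    print(cfg)
    print(annotation_summary(0))
```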
DeepTek CXR Analyzer does not make treatment recommendations or provide a diagnosis. Radiologists should review images annotated by DeepTek CXR Analyzer concurrently with original, unannotated images before making the final decision on a case. DeepTek CXR Analyzer is an adjunct tool and does not replace the role of the radiologists. The CAD-generated output should not be used as the primary interpretation by radiologists.
DeepTek CXR Analyzer has been trained using a large and diverse dataset of more than 100,000 chest X-ray images sourced from 30 distinct sites in India, including medical imaging centers, data partners, and hospitals, and spanning over 15 different modality manufacturers. The inclusion of such a diverse range of data is intended to help the performance of DeepTek CXR Analyzer generalize across a wide variety of confounders.
DeepTek CXR Analyzer is not designed to detect conditions other than those classified under the following categories: Lung, Cardiac, Pleura, and Hardware. Radiologists should review original images for all suspected ROIs.
Here's a breakdown of the acceptance criteria and study details for the DeepTek CXR Analyzer v1.0, based on the provided FDA 510(k) submission document:
Acceptance Criteria and Reported Device Performance
The core acceptance criteria are based on the performance metrics of the standalone assessment and the clinical performance assessment. The document states that the device's performance was evaluated by measuring sensitivity, specificity, AUROC (Area Under the Receiver Operating Characteristic curve) for detection, and wAFROC-FOM (weighted Alternative Free-Response Receiver Operating Characteristic Figure of Merit) for localization. For the clinical study, the primary objective was to demonstrate that the wAFROC-FOM for aided readings was superior to unaided readings.
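For readers unfamiliar with these detection metrics, the sketch below shows how image-level sensitivity, specificity, and AUROC are typically computed for a single category. The labels and scores are synthetic stand-ins; the submission does not publish per-image outputs, and this is not the sponsor's analysis code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic image-level labels (1 = suspicious ROI present) and model scores
# for one category, e.g. "Lungs"; real per-image outputs are not public.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_score = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, size=200), 0.0, 1.0)
y_pred = (y_score >= 0.5).astype(int)  # binary call at a fixed operating point

tp = int(np.sum((y_pred == 1) & (y_true == 1)))
tn = int(np.sum((y_pred == 0) & (y_true == 0)))
fp = int(np.sum((y_pred == 1) & (y_true == 0)))
fn = int(np.sum((y_pred == 0) & (y_true == 1)))

sensitivity = tp / (tp + fn)            # true-positive rate
specificity = tn / (tn + fp)            # true-negative rate
auroc = roc_auc_score(y_true, y_score)  # threshold-independent discrimination
print(f"sensitivity={sensitivity:.3f}  specificity={specificity:.3f}  AUROC={auroc:.3f}")
```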
Table 1: Acceptance Criteria (Implied) and Reported Device Performance (Standalone)
| Metric (Image-Level Detection) | Category | Target (Implied Acceptance) | Reported Performance [95% CI] |
|---|---|---|---|
| Sensitivity | Lungs | (High) | 0.903 [0.887-0.914] |
| Sensitivity | Pleura | (High) | 0.924 [0.902-0.932] |
| Sensitivity | Cardiac | (High) | 0.924 [0.890-0.952] |
| Sensitivity | Hardware | (High) | 0.947 [0.936-0.955] |
| Sensitivity | Aggregate | (High) | 0.926 [0.917-0.933] |
| Specificity | Lungs | (High) | 0.937 [0.927-0.948] |
| Specificity | Pleura | (High) | 0.897 [0.879-0.911] |
| Specificity | Cardiac | (High) | 0.930 [0.925-0.941] |
| Specificity | Hardware | (High) | 0.947 [0.939-0.954] |
| Specificity | Aggregate | (High) | 0.933 [0.925-0.938] |
| AUROC | Lungs | (High) | 0.971 [0.968-0.976] |
| AUROC | Pleura | (High) | 0.964 [0.954-0.970] |
| AUROC | Cardiac | (High) | 0.978 [0.968-0.985] |
| AUROC | Hardware | (High) | 0.980 [0.976-0.983] |
| AUROC | Aggregate | (High) | 0.974 [0.970-0.977] |
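The summary reports 95% confidence intervals for each metric but does not state how they were derived. One common approach is a nonparametric, case-resampling bootstrap, sketched below on synthetic data; the submission may have used a different method.

```python
import numpy as np

def sensitivity(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return tp / (tp + fn)

def bootstrap_ci(y_true, y_pred, metric, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI obtained by resampling cases with replacement."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = [metric(y_true[idx], y_pred[idx])
             for idx in (rng.integers(0, n, size=n) for _ in range(n_boot))]
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

# Synthetic labels and predictions with roughly 90% agreement, for illustration.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)
y_pred = np.where(rng.random(1000) < 0.9, y_true, 1 - y_true)
print(f"sensitivity = {sensitivity(y_true, y_pred):.3f}, "
      f"95% CI = {bootstrap_ci(y_true, y_pred, sensitivity)}")
```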
Table 2: Acceptance Criteria (Implied) and Reported Device Performance (Standalone Localization)
| Metric (ROI-Level Localization) | Category | Target (Implied Acceptance) | Reported Performance [95% CI] |
|---|---|---|---|
| wAFROC-FOM | Lungs | (High) | 0.913 [0.904-0.924] |
| wAFROC-FOM | Pleura | (High) | 0.884 [0.866-0.902] |
| wAFROC-FOM | Cardiac | (High) | 0.952 [0.941-0.966] |
| wAFROC-FOM | Hardware | (High) | 0.954 [0.948-0.963] |
| wAFROC-FOM | Aggregate | (High) | 0.920 [0.908-0.926] |
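The 510(k) summary does not spell out how the wAFROC-FOM was computed. The sketch below follows the commonly used definition of the weighted AFROC figure of merit (each lesion rating on a diseased case is compared with the highest false-positive rating on each non-diseased case), assuming equal lesion weights; names and data are illustrative only and may differ from the sponsor's implementation.

```python
def wafroc_fom(normal_fp_ratings, abnormal_cases):
    """Weighted AFROC figure of merit (sketch, equal lesion weights).

    normal_fp_ratings: one entry per non-diseased case, the highest false-positive
        mark rating on that case, or None if the case has no marks.
    abnormal_cases: one list per diseased case of per-lesion ratings
        (None = lesion not marked).
    """
    NEG_INF = float("-inf")
    fps = [NEG_INF if r is None else r for r in normal_fp_ratings]

    def psi(fp, tp):  # Wilcoxon-style comparison kernel
        return 1.0 if tp > fp else (0.5 if tp == fp else 0.0)

    total = 0.0
    for lesion_ratings in abnormal_cases:
        weight = 1.0 / len(lesion_ratings)  # equal lesion weights within a case
        for rating in lesion_ratings:
            tp = NEG_INF if rating is None else rating
            total += weight * sum(psi(fp, tp) for fp in fps)
    return total / (len(fps) * len(abnormal_cases))

# Toy example: 2 normal cases (one false-positive mark rated 0.3) and
# 2 abnormal cases with a single lesion each, rated 0.9 and 0.7.
print(wafroc_fom([None, 0.3], [[0.9], [0.7]]))  # 1.0: every lesion outscores every FP
```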
Table 3: Acceptance Criteria (Clinical Study Null/Alternate Hypothesis) and Reported Device Performance (Clinical Study)
| Metric (Clinical wAFROC-FOM) | Null Hypothesis (H0) | Alternate Hypothesis (H1) | Reported Performance [95% CI] |
|---|---|---|---|
| wAFROC-FOM aided | | | 0.893 [0.871-0.914] |
| wAFROC-FOM unaided | | | 0.821 [0.791-0.852] |
| Difference (Aided - Unaided) | ≤ 0 (No improvement or worse) | > 0 (Superiority of aided) | 0.072 (p < 0.0001) |
The reported performance shows that the wAFROC-FOM for aided readings (0.893) was significantly higher (p < 0.0001) than for unaided readings (0.821), thereby rejecting the null hypothesis and supporting the alternate hypothesis that the performance of readers aided by DeepTek CXR Analyzer is superior to their unaided performance.
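As an illustration of the hypothesis framing only, the sketch below applies a one-sided paired comparison to hypothetical per-reader wAFROC-FOMs. The actual study would have used a full MRMC analysis (e.g., an Obuchowski-Rockette style model) that also accounts for case variability, so this is not the submitted statistical method.

```python
import numpy as np
from scipy import stats

# Hypothetical per-reader wAFROC-FOMs for 24 readers; only aggregate figures
# are published, so these numbers are synthetic.
rng = np.random.default_rng(2)
unaided = np.clip(rng.normal(0.82, 0.03, size=24), 0.0, 1.0)
aided = np.clip(unaided + rng.normal(0.07, 0.02, size=24), 0.0, 1.0)

difference = aided.mean() - unaided.mean()
# One-sided paired test of H1: aided - unaided > 0 (superiority of aided reading).
t_stat, p_value = stats.ttest_rel(aided, unaided, alternative="greater")
print(f"mean difference = {difference:.3f}, one-sided p = {p_value:.2g}")
print(f"readers improved: {int(np.sum(aided > unaided))}/24")
```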
Study Details:

- Sample size used for the test set and the data provenance:
  - Standalone Performance Assessment:
    - Sample Size: 3,000 scans (2,000 from the NIH Chest X-ray Database and 1,000 from the Segmed Insight Platform)
    - Data Provenance: NIH Chest X-ray Database (country of origin not specified, but typically US-based); Segmed Insight Platform (13 different sites across various regions of the United States)
    - Type: Retrospective data. The datasets were explicitly stated as not used for DeepTek CXR Analyzer model training and development.
  - Clinical Performance Assessment (MRMC Study):
    - Sample Size: 300 frontal chest radiographs
    - Data Provenance: Obtained from 13 U.S. hospitals
    - Type: Retrospective
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
  - Standalone Performance Assessment: 3 U.S. board-certified radiologists with 9, 11, and 25 years of experience, respectively.
  - Clinical Performance Assessment: reader performance was scored against the same ground truth established by the standalone assessment's radiologist panel; the document does not describe a separate truthing process for the clinical study.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
  - Standalone Performance Assessment: the ground truth for the presence or absence of an ROI in each category was defined as the majority opinion of 2 out of the 3 radiologists (2/3 consensus).
  - If the majority opinion stated a suspicious ROI was present, the union of the areas encompassed by the bounding boxes drawn by all annotators identifying a suspicious ROI for that category was taken as the ground truth ROI (a minimal sketch of this rule follows below).
  - If the majority opinion stated a suspicious ROI was absent, no ROI was demarcated on the image.
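A minimal sketch of the 2-of-3 adjudication rule described above, using hypothetical data structures (the union region is represented simply as the collection of all annotators' boxes):

```python
def adjudicate(category_calls, boxes_per_reader):
    """Apply the 2/3-consensus rule for one image and one category.

    category_calls: booleans, one per radiologist, True if the reader judged a
        suspicious ROI of this category to be present.
    boxes_per_reader: per-reader lists of (x1, y1, x2, y2) bounding boxes.
    Returns (present, ground_truth_boxes).
    """
    present = sum(category_calls) >= 2  # majority of 2 out of 3
    if not present:
        return False, []                # no ROI demarcated on the image
    # Ground-truth ROI: union of the areas covered by all annotators' boxes,
    # represented here as the collected boxes themselves.
    union = [box for reader_boxes in boxes_per_reader for box in reader_boxes]
    return True, union

# Toy example: readers 1 and 2 mark a "Pleura" ROI, reader 3 does not.
calls = [True, True, False]
boxes = [[(10, 20, 60, 80)], [(15, 25, 65, 85)], []]
print(adjudicate(calls, boxes))
```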
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
  - Yes, an MRMC comparative effectiveness study was done.
  - Effect size:
    - The aggregated wAFROC-FOM for aided readings was 0.893 [0.871-0.914].
    - The aggregated wAFROC-FOM for unaided readings was 0.821 [0.791-0.852].
    - The improvement (effect size) in aggregated wAFROC-FOM when readers were aided by the device was 0.072 (0.893 - 0.821). This improvement was statistically significant (p < 0.0001).
    - Specifically, 24/24 (100%) of readers showed an improvement in wAFROC-FOM for localizing suspicious ROIs across all categories when aided by DeepTek CXR Analyzer.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
  - Yes, a standalone performance assessment was done.
  - This assessment evaluated DeepTek CXR Analyzer's detection performance (sensitivity, specificity, AUROC) and localization performance (wAFROC-FOM) independently, in the absence of human interaction.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
  - Expert consensus: the ground truth for the test sets (both standalone and, implicitly, the clinical study) was established by a panel of 3 U.S. board-certified radiologists through a majority opinion (2 out of 3).
- The sample size for the training set:
  - The device was trained using a large and diverse dataset of more than 100,000 chest X-ray images.
- How the ground truth for the training set was established:
  - The document states the training data was sourced from "30 distinct sites from India, including medical imaging centers, data partners, and medical hospitals." It does not explicitly describe how ground truth was established for the training set (e.g., number of readers, their qualifications, or adjudication method); that information is not present in the provided 510(k) summary excerpt. The sourcing from medical imaging centers and hospitals implicitly suggests the datasets carried existing interpretations or were curated by experts.