510(k) Data Aggregation (206 days to decision)
qXR-Detect is a computer-assisted detection (CADe) software device that analyzes chest radiographs and highlights suspicious regions of interest (ROIs). Each suspicious ROI is highlighted by qXR-Detect and categorized into one of six categories (lung, pleura, bone, Mediastinum & Hila & Heart, hardware, and other). The device is intended for use as a concurrent reading aid and is indicated for adults only. qXR-Detect is an adjunct tool and is not intended to replace a clinician's review of the radiograph or their clinical judgment; users must not use the qXR-Detect output as the primary interpretation.
The qXR-Detect device is intended to generate a secondary digital radiographic image that facilitates confirmation of the presence of suspicious regions of interest, within the listed categories, on a chest X-ray.
The software works with DICOM chest X-ray images and can be deployed on a secure cloud server. De-identified chest X-rays are sent to qXR-Detect via HTTPS from the client's software. Once processing is complete, results are fetched by the client's software and forwarded to its PACS, to the console of the digital radiography system, or to other systems such as a specified radiology software database.
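As an illustration of this integration pattern only (the endpoint paths, field names, and polling behavior below are assumptions, not the vendor's documented API), a minimal client-side flow might look like:

```python
import time

import requests

API_BASE = "https://qxr-detect.example.com/api"  # hypothetical endpoint


def analyze_chest_xray(dicom_path: str, api_key: str) -> dict:
    """Send a de-identified DICOM chest X-ray over HTTPS and poll for results."""
    headers = {"Authorization": f"Bearer {api_key}"}

    # Upload the de-identified study; a multipart upload is an assumed convention.
    with open(dicom_path, "rb") as f:
        resp = requests.post(
            f"{API_BASE}/studies",
            headers=headers,
            files={"file": ("study.dcm", f, "application/dicom")},
            timeout=60,
        )
    resp.raise_for_status()
    study_id = resp.json()["study_id"]  # hypothetical response field

    # Poll until processing completes, then return the ROI results
    # (highlighted regions, each assigned one of the six categories) so the
    # client can forward them to PACS or the acquisition console.
    while True:
        result = requests.get(
            f"{API_BASE}/studies/{study_id}/results",
            headers=headers,
            timeout=30,
        )
        result.raise_for_status()
        body = result.json()
        if body.get("status") == "complete":  # hypothetical status field
            return body
        time.sleep(2)
```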
Here's a breakdown of the acceptance criteria and the study details for the qXR-Detect device, based on the provided FDA clearance letter:
Acceptance Criteria and Reported Device Performance
Device Name: qXR-Detect
Type: Computer-assisted detection (CADe) software device
| Category | Acceptance Criteria (Standalone Performance) - AUC (95% CI) | Reported Device Performance (Standalone Performance) - AUC (95% CI) |
|---|---|---|
| Lung | Not explicitly stated as a minimum threshold in the provided text; satisfactory performance is implied. | 0.893 (0.879-0.907) |
| Pleura | Not explicitly stated | 0.95 (0.94-0.96) |
| Mediastinum/Hila | Not explicitly stated | 0.891 (0.875-0.907) |
| Bone | Not explicitly stated | 0.879 (0.854-0.905) |
| Hardware | Not explicitly stated | 0.958 (0.95-0.966) |
| Other | Not explicitly stated | 0.915 (0.895-0.935) |
| Category | Acceptance Criteria (Standalone Performance) - wAFROC (95% CI) | Reported Device Performance (Standalone Performance) - wAFROC (95% CI) |
|---|---|---|
| Lung | Implied that wAFROC should be above 0.8 for most categories | 0.831 (0.816-0.846) |
| Pleura | Implied that wAFROC should be above 0.8 for most categories | 0.89 (0.875-0.905) |
| Mediastinum & Hila & Heart | Implied that wAFROC should be above 0.8 for most categories | 0.867 (0.85-0.883) |
| Bone | Implied that wAFROC should be above 0.8 for most categories | 0.821 (0.789-0.852) |
| Hardware | Implied that wAFROC should be above 0.8 for most categories | 0.771 (0.759-0.782) |
| Others | Implied that wAFROC should be above 0.8 for most categories | 0.871 (0.845-0.897) |
| Aggregate | Implied that wAFROC should be above 0.8 for most categories | 0.839 (0.824-0.854) |
| Category | Acceptance Criteria (Clinical Performance) - wAFROC Improvement | Reported Device Performance (Clinical Performance) - wAFROC Improvement |
|---|---|---|
| Overall wAFROC | Not explicitly stated as a minimum threshold, but a statistically significant improvement (P < 0.001) was targeted. | Improved from 0.6894 (unaided) to 0.7505 (aided), an improvement of 0.0611 (P < 0.001). |
| All Categories | Improvement expected | All categories showed improvement. |
| False Positives per Image | Reduction expected | Reduced from 0.4182 (unaided) to 0.3300 (aided). |
| Category | Acceptance Criteria (Clinical Performance) - AUC Improvement | Reported Device Performance (Clinical Performance) - AUC Improvement |
|---|---|---|
| Overall AUC | Not explicitly stated as a minimum threshold, but a statistically significant improvement was targeted. | Improved from 0.8466 (unaided) to 0.8720 (aided). |
| All Categories | Improvement expected | All categories showed improvement. |
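For context, the clearance summary does not define the wAFROC figure of merit it reports. Under the standard FROC methodology (an assumption here, following Chakraborty's formulation, not stated in the source), the wAFROC curve plots the weighted lesion-localization fraction (wLLF) against the false-positive fraction (FPF) from non-diseased cases, and the figure of merit is the area under that curve:

$$\mathrm{wLLF}(\zeta) = \frac{1}{K_2}\sum_{k=1}^{K_2}\sum_{l=1}^{L_k} w_{kl}\,\mathbf{1}\left[z_{kl} \ge \zeta\right], \qquad \sum_{l=1}^{L_k} w_{kl} = 1,$$

$$\theta_{\mathrm{wAFROC}} = \int_0^1 \mathrm{wLLF}\; d\,\mathrm{FPF},$$

where $K_2$ is the number of abnormal cases and $z_{kl}$, $w_{kl}$ are the rating and weight of lesion $l$ on case $k$. A perfect reader scores 1, so the reported unaided-to-aided change from 0.6894 to 0.7505 is a shift of 0.0611 in this area.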
Study Details for Device Performance
1. Acceptance Criteria and Reported Device Performance (See table above)
2. Sample size used for the test set and the data provenance:
- Standalone Performance Study Test Set: The exact sample size for the standalone test set is not explicitly given, but the text states "Most of the scans for the study were obtained from across the US spanning 40 states and 5 regions in the US."
- Clinical Performance Study Test Set: 301 samples were used.
- Data Provenance (Clinical Performance Test Set): Not explicitly stated, but given the training data provenance and the testing context, it is likely that the clinical test set also included data from across the US (40 states and 5 regions). The data was retrospective, as it was described as a "multireader multicase study conducted on 301 samples."
- Data Characteristics: Well balanced by sex (approximately 50-50 male-to-female), with ages ranging from 22 to over 85 years.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Standalone Performance Study: 3 ground truthers annotated the chest X-ray scans. Their specific qualifications are not detailed beyond "ground truthers," but it's implied they are experts in medical image interpretation for the purpose of establishing ground truth for suspicious ROIs.
- Clinical Performance Study: The ground truth for the clinical study was established by the same process as the standalone study (3 ground truthers), though this is not explicitly restated in the clinical study section. The "readers" in the clinical study (who read images with and without AI assistance) were 18 professionals, including radiologists, ER physicians, and family medicine practitioners; their years of experience are not specified.
4. Adjudication method for the test set:
- Standalone Performance Study (Ground Truth Establishment): The description implies a pooled, consensus-style approach rather than a specific numerical adjudication rule: "If there is at least one ground truth boundary for a particular category, the scan is considered to be positive for that category." A single expert's boundary could therefore make a scan positive for a category, although the overarching "ground truth established by 3 ground truthers" suggests collective expert input. A more precise adjudication method (e.g., 2-out-of-3 majority) is not stated for individual boundary decisions (a minimal sketch of the stated rule appears below, after item 4).
- Clinical Performance Study: The ground truth for judging reader performance against was the same as established for the standalone study. The comparison of reader performance (unaided vs. aided) implicitly uses this established ground truth.
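To make the stated positivity rule concrete, here is a minimal sketch (the annotation data structures are hypothetical; the source does not describe a format):

```python
from typing import Dict, List, Tuple

Boundary = List[Tuple[int, int]]  # a polygon outlining one ROI (assumed format)

CATEGORIES = ["lung", "pleura", "bone", "mediastinum_hila_heart", "hardware", "other"]


def scan_category_labels(annotations: List[Dict[str, List[Boundary]]]) -> Dict[str, bool]:
    """Stated rule: a scan is positive for a category if at least one
    ground-truth boundary exists for that category, across all ground truthers."""
    return {
        category: any(truther.get(category) for truther in annotations)
        for category in CATEGORIES
    }


# Example: two of three truthers mark a lung boundary; the scan is lung-positive.
truthers = [
    {"lung": [[(10, 10), (40, 10), (40, 40)]]},
    {"lung": [[(12, 11), (42, 12), (41, 44)]]},
    {},
]
print(scan_category_labels(truthers)["lung"])  # True
```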
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance:
- Yes, an MRMC comparative effectiveness study was done.
- Effect Size of Improvement:
- Overall wAFROC: Improved by 0.0611 (from 0.6894 unaided to 0.7505 aided). This improvement was statistically significant (P < 0.001).
- False Positives per image: Reduced from 0.4182 (unaided) to 0.3300 (aided).
- Overall AUC: Improved from 0.8466 (unaided) to 0.8720 (aided).
- Individual Reader Improvement: 17 out of 18 readers showed improvement in wAFROC-FOM across all categories. All 18 readers improved in detecting and localizing suspicious lung ROIs.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, a standalone performance study was done. The results are presented in "Table 2 Standalone Performance Testing Results for qXR-Detect" (AUC metrics) and "Table 3 Standalone Performance Testing Results for localization - qXR-Detect" (wAFROC metrics).
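As an illustration of how a per-category standalone AUC with a 95% CI, like those reported in Table 2, is commonly computed (a generic sketch using scikit-learn and a percentile bootstrap, not the submitter's actual analysis):

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def auc_with_bootstrap_ci(y_true, y_score, n_boot=2000, seed=0):
    """AUC point estimate with a percentile-bootstrap 95% CI.

    y_true: binary labels (scan positive for the category or not).
    y_score: the algorithm's continuous score for that category.
    """
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    rng = np.random.default_rng(seed)
    point = roc_auc_score(y_true, y_score)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), size=len(y_true))
        if len(np.unique(y_true[idx])) < 2:
            continue  # AUC is undefined without both classes in the resample
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, (lo, hi)
```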
7. The type of ground truth used:
- The ground truth was established by expert consensus/annotation. "3 ground truthers annotated the chest X-ray scans for the presence of suspicious ROI categories."
8. The sample size for the training set:
- The underlying algorithm was trained on a large dataset of ~2.5 million Chest X-Ray scans.
9. How the ground truth for the training set was established:
- The document states only that the training data "consisted of various abnormal regions of interest." It does not explicitly detail how training-set ground truth was established; given the detailed ground-truth process for the test set, it was likely established through similar expert radiological review and annotation.