Search Results
Found 3 results
510(k) Data Aggregation
(104 days)
qER-CTA (v1.0)
(98 days)
qER-Quant
The qER-Quant device is intended for the automatic labeling and visualization of segmentable brain structures from a set of non-contrast head CT (NCCT) images. The software is intended to automate the current manual process of identifying, labeling, and quantifying the volume of segmentable brain structures identified on NCCT images. qER-Quant provides volumes from NCCT images acquired at a single time point and provides a table with comparative analysis for two or more images acquired with the same image acquisition protocol for the same individual at multiple time points.
The qER-Quant software is indicated for use in the following structures: Intracranial Hyperdensities, Lateral Ventricles and Midline Shift.
qER-Quant is a standalone software device that processes non-contrast head CT scans to outline and quantify the structures described in the intended use. The qER-Quant software interacts with the user's picture archiving and communication system (PACS) to receive scans and returns the results to the same destination.
The analysis module of the qER-Quant software consists of a set of pre-trained convolutional neural networks (CNNs) that form the core processing component shown in Figure 1. This core processing component is coupled with a pre-processing module that prepares input Digital Imaging and Communications in Medicine (DICOM) files for processing by the CNNs, and a post-processing module that converts the output into visual and tabular form for users.
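The three-stage design described above (pre-processing, CNN inference, post-processing) is a common pattern for this class of device. Below is a minimal sketch of that pattern, not Qure.ai's actual implementation; all function names, the `models` dictionary, and the `.predict` interface are hypothetical.

```python
import numpy as np

def preprocess(dicom_slices) -> np.ndarray:
    """Hypothetical pre-processing: stack a DICOM series (e.g., pydicom
    datasets) into a 3D volume and normalize intensities."""
    volume = np.stack([s.pixel_array for s in dicom_slices]).astype(np.float32)
    return (volume - volume.mean()) / (volume.std() + 1e-8)

def run_cnns(volume: np.ndarray, models: dict) -> dict:
    """Hypothetical core processing: each pre-trained CNN emits a binary
    segmentation mask for one target structure."""
    return {name: model.predict(volume) for name, model in models.items()}

def postprocess(masks: dict, voxel_volume_ml: float) -> dict:
    """Hypothetical post-processing: reduce masks to per-structure volumes
    for the tabular output returned to PACS."""
    return {name: float(mask.sum()) * voxel_volume_ml
            for name, mask in masks.items()}
```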
Here's a breakdown of the acceptance criteria and study details for the qER-Quant device, based on the provided text:
qER-Quant Device Performance Study Details
1. Acceptance Criteria and Reported Device Performance
The acceptance criteria were defined based on the accuracy of the qER-Quant system when compared against manually labeled ground truth. The reported device performance met these pre-set criteria.
Metric | Acceptance Criteria (Implied / Context) | Reported Device Performance |
---|---|---|
Intracranial Hyperdensity | | |
Absolute Error (Volume) | Exceeds preset acceptance criteria | Mean ± SD: 6.56 (7.33) ml; Median (10th-90th percentile): 3.98 (0.52 - 17.35) ml |
Dice Score (Segmentation Accuracy) | Exceeds preset acceptance criteria | Mean (95% CI): 0.75 (0.72 - 0.78) |
Midline Shift | | |
Absolute Error (Shift) | Exceeds preset acceptance criteria | Mean ± SD: 1.37 (1.23) mm; Median (10th-90th percentile): 1.15 (0.23 - 2.59) mm |
Dice Score (Segmentation Accuracy) | Not applicable | Not applicable |
Left Lateral Ventricle | | |
Absolute Error (Volume) | Exceeds preset acceptance criteria | Mean ± SD: 2.09 (1.88) ml; Median (10th-90th percentile): 1.60 (0.29 - 4.24) ml |
Dice Score (Segmentation Accuracy) | Exceeds preset acceptance criteria | Mean (95% CI): 0.79 (0.78 - 0.81) |
Right Lateral Ventricle | | |
Absolute Error (Volume) | Exceeds preset acceptance criteria | Mean ± SD: 2.18 (1.72) ml; Median (10th-90th percentile): 1.88 (0.40 - 4.53) ml |
Dice Score (Segmentation Accuracy) | Exceeds preset acceptance criteria | Mean (95% CI): 0.75 (0.73 - 0.77) |
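For context, the two metrics in the table are standard segmentation measures: the Dice score quantifies voxel-level overlap between the predicted and expert-labeled masks, and the absolute error compares the derived volumes. A minimal sketch of both, assuming binary NumPy masks and a known per-voxel volume (the document does not describe how qER-Quant computes them):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * float(intersection) / float(denom) if denom > 0 else 1.0

def absolute_volume_error_ml(pred: np.ndarray, truth: np.ndarray,
                             voxel_volume_ml: float) -> float:
    """Absolute difference between predicted and ground-truth structure
    volumes, converted to milliliters via the scan's voxel size."""
    return abs(float(pred.sum()) - float(truth.sum())) * voxel_volume_ml
```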
2. Sample Size and Data Provenance
- Test Set Sample Sizes:
- Intracranial Hyperdensity: 183 scans
- Midline Shift: 188 scans
- Left Lateral Ventricle: 210 scans
- Right Lateral Ventricle: 210 scans
- Reproducibility testing was done on 20% of these CT scans.
- Data Provenance: The document does not explicitly state the country of origin or whether the data was retrospective or prospective; it describes the data only as "a set of head CT scans."
3. Number of Experts and Qualifications for Ground Truth Establishment
- Number of Experts: The document states "experts" (plural) were used but does not specify the exact number.
- Qualifications of Experts: Not specified beyond being "experts" in the context of manually labeling CT scans.
4. Adjudication Method for the Test Set
The document does not explicitly state an adjudication method (e.g., 2+1, 3+1). It only mentions that the ground-truth outlines were "manually labeled by experts."
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No, an MRMC comparative effectiveness study was not reported. The performance testing was a "standalone" evaluation of the device's accuracy against expert-generated ground truth.
6. Standalone Performance (Algorithm Only)
- Yes, a standalone performance study was conducted. The document states that the standalone performance testing "consisted of a set of head CT scans with the outlines of the target structures manually labeled by experts." The results detailed in Table 2 are from this standalone evaluation.
7. Type of Ground Truth Used
- The ground truth used was expert consensus / manual labeling. The document clearly states: "manually labeled by experts."
8. Sample Size for the Training Set
- The document does not provide the sample size for the training set. It only describes the architecture of the analysis module as "a set of pre-trained convolutional neural networks (CNNs)."
9. How Ground Truth for the Training Set Was Established
- The document does not explicitly state how the ground truth for the training set was established. It describes the CNNs as "pre-trained," which implies a training phase using labeled data, but the method of ground truth establishment for that specific data is not detailed.
(72 days)
qER
qER is a radiological computer-aided triage and notification software for the analysis of non-contrast head CT images.
The device is intended to assist hospital networks and trained medical specialists in workflow triage by flagging the following suspected positive findings of pathologies in head CT images: intracranial hemorrhage, mass effect, midline shift and cranial fracture.
qER uses an artificial intelligence algorithm to analyze images on a standalone cloud-based application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected findings. Notifications include non-diagnostic preview images that are meant for informational purposes only. The device does not alter the original medical image and is not intended to be used as a diagnostic device.
The results of the device are intended to be used in conjunction with other patient information and based on professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.
Qure.ai's head CT scan interpretation software, qER, is a deep-learning-based software device that analyzes head CT scans for signs of intracranial hemorrhage, midline shift, mass effect, or cranial fracture in order to prioritize them for clinical review. The standalone software device consists of an on-premise module and a cloud module. qER accepts non-contrast adult head CT scan DICOM files as input and provides a priority flag indicating critical scans. Additionally, the software provides a preview of critical scans to the medical specialist.
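The document does not describe qER's internal logic, but a triage-and-notification step of this kind can be pictured as reducing per-abnormality model scores to a single priority flag. The sketch below is purely illustrative, with hypothetical names and an assumed operating threshold.

```python
from dataclasses import dataclass

# The four target abnormalities named in the indications for use.
TARGET_FINDINGS = ("intracranial_hemorrhage", "mass_effect",
                   "midline_shift", "cranial_fracture")

@dataclass
class TriageResult:
    suspected: list   # findings whose score crossed the threshold
    priority: bool    # True -> flag the scan for prioritized review

def triage(scores: dict, threshold: float = 0.5) -> TriageResult:
    """Hypothetical: raise the priority flag if any target finding's
    model score meets the (assumed) operating threshold."""
    suspected = [f for f in TARGET_FINDINGS if scores.get(f, 0.0) >= threshold]
    return TriageResult(suspected=suspected, priority=bool(suspected))
```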
Here's a breakdown of the acceptance criteria and the study proving the qER device meets them, based on the provided text:
Acceptance Criteria and Device Performance
The core purpose of the qER device is workflow triage by identifying suspected positive findings of pathologies in head CT images. The performance data presented focuses on the device's ability to accurately detect these pathologies in a standalone setting.
1. Table of Acceptance Criteria and Reported Device Performance
The document does not state numerical acceptance thresholds beyond noting that performance "exceeded the predefined success criteria, as well as the required performance criteria for triage and notification software as per the special controls for QAS." The reported sensitivities and specificities for each pathology therefore effectively serve as the demonstrated "acceptance" level the device achieved.
Abnormality | Acceptance Criteria (Implied Success) | Sensitivity, % [95% CI] | Specificity, % [95% CI] | AUC × 100 [95% CI] |
---|---|---|---|---|
Intracranial Hemorrhage | High sensitivity & specificity for triage | 96.98 (95.32 - 98.17) | 93.92 (91.87 - 95.58) | 98.53 (98.00 - 99.15) |
Cranial Fracture | High sensitivity & specificity for triage | 96.77 (93.74 - 98.60) | 92.72 (91.00 - 94.21) | 97.66 (96.88 - 98.57) |
Mass Effect | High sensitivity & specificity for triage | 96.39 (94.28 - 97.88) | 96.00 (94.45 - 97.21) | 99.09 (98.73 - 99.52) |
Midline Shift | High sensitivity & specificity for triage | 97.34 (95.30 - 98.67) | 95.36 (93.79 - 96.64) | 99.09 (98.74 - 99.51) |
Any of the 4 target abnormalities | High sensitivity & specificity for triage | 98.53 (97.45 - 99.24) | 91.22 (88.39 - 93.55) | NA |
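For readers unfamiliar with these metrics: sensitivity is the fraction of truly positive scans the device flags, and specificity is the fraction of truly negative scans it does not flag. The sketch below computes both from confusion-matrix counts, with a Wilson score interval as one common choice of binomial confidence interval; the 510(k) summary does not state which interval method was actually used.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1.0 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of truly positive scans flagged: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of truly negative scans not flagged: TN / (TN + FP)."""
    return tn / (tn + fp)
```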
Additionally, a key performance metric for a triage device is the time to notification:
Parameter | Acceptance Criteria (Implied improvement over std. care) | Reported Device Performance (Mean [95% CI]) | Reported Device Performance (Median [95% CI]) |
---|---|---|---|
Time to open exam in the standard of care | Benchmark for comparison | 65.54 (59.14 - 71.76) min | 60.01 (54.57 - 77.63) min |
Time-to-notification with qER | Significantly lower than standard of care | 2.11 (1.45 - 2.61) min | 1.21 (1.12 - 1.25) min |
Study Details
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 1320 head CT scans.
- Data Provenance: Retrospective, multicenter study. Data originated from multiple locations within the United States.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: 3 board-certified radiologists.
- Qualifications: The document explicitly states "board-certified radiologists." No further details on years of experience are provided.
4. Adjudication Method for the Test Set
- The text states that the ground truth was established by "3 board-certified radiologists reading the scans." It does not explicitly mention an adjudication method (e.g., 2+1, 3+1 consensus). It is implied that their readings defined the ground truth, but the process of resolving discrepancies among the three readers is not detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and what was the effect size of how much human readers improve with AI vs without AI assistance
- The provided text does not describe an MRMC comparative effectiveness study where human readers' performance with and without AI assistance was directly compared. The study primarily focuses on the standalone performance of the qER algorithm and its ability to reduce the "time to notification" compared to standard of care "time to open." While the "time-to-notification" analysis suggests a significant workflow improvement when using qER for triage (2.11 mins vs. 65.54 mins), this is not a direct measure of human reader diagnostic accuracy improvement with AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Yes, a standalone performance study was done. The "Performance Data" section explicitly states, "A retrospective, multicenter, blinded clinical study was conducted to test the accuracy of qER at triaging head CT scans... Sensitivity and specificity exceeded the predefined success criteria... demonstrating the ability of the qER device to effectively triage studies containing one of these conditions." The results in Table 2 are for the qER algorithm's accuracy independently.
7. The Type of Ground Truth Used
- Expert Consensus: The ground truth for the pathologies (Intracranial hemorrhage, cranial fractures, mass effect, midline shift, and absence of these abnormalities) was established by "3 board-certified radiologists reading the scans." This indicates an expert consensus approach to defining the ground truth.
8. The Sample Size for the Training Set
- The document does not specify the sample size used for the training set. It mentions that the qER software uses "a pre-trained artificial intelligence algorithm" and "a pre-trained classification convolutional neural network (CNN) that has been trained to detect a specific abnormality from head CT scan images." However, the size of the dataset used for this training is not disclosed in the provided text.
9. How the Ground Truth for the Training Set was Established
- The document does not explicitly describe how the ground truth for the training set was established. It only states that the CNN was "pre-trained" on medical images to detect specific abnormalities. It is common practice for such training to also rely on expert annotations, but this is not detailed for the training set in this document.