510(k) Data Aggregation (151 days)
LungQ v3.0.0
The Thirona LungQ software provides reproducible CT values for pulmonary tissue, which is essential for quantitative support of diagnosis and follow-up examination. The LungQ software can be used to support physicians in the diagnosis and documentation of pulmonary tissue images (e.g., abnormalities) from thoracic CT datasets. It provides 3D segmentation and isolation of sub-compartments, volumetric analysis, density evaluation, and reporting tools.
The LungQ software is designed to aid in the interpretation of Computed Tomography (CT) scans of the thorax that may contain pulmonary abnormalities. LungQ is stand-alone command-line software: it must be run from a command-line interpreter and has no graphical user interface.
This document describes the acceptance criteria and the study conducted to prove that the device, LungQ v3.0.0, meets these criteria, as presented in the FDA 510(k) summary.
1. Table of Acceptance Criteria and Reported Device Performance
| Feature/Metric | Acceptance Criteria | Reported Device Performance (LungQ v3.0.0 vs LungQ v1.1.0) | Reported Device Performance (LungQ v3.0.0 vs Human Experts) |
|---|---|---|---|
| Lung and Lobar Volume | Difference ≤ 10% | Difference less than threshold value | N/A (experts compared only for (sub)segments) |
| Density: LAA-950HU | Agreement limits -1% to 1% | Difference less than threshold value | N/A (experts compared only for (sub)segments) |
| Density: LAA-910HU | Agreement limits -10% to 10% | Difference less than threshold value | N/A (experts compared only for (sub)segments) |
| Density: 15th Percentile Density (PD15) | Agreement limits -10 HU to 10 HU | Difference less than threshold value | N/A (experts compared only for (sub)segments) |
| Fissure Completeness | Az value ≥ 0.95 (ROC analysis) | Az value = 0.97 | N/A |
| (Sub)segmental Volumes | Tolerable variability: 150 mL (absolute), 5% (relative) | Mean difference (SD) less than threshold value | Mean difference (SD) less than threshold value |
| (Sub)segmental Density Scores | Tolerable variability: 150 mL (absolute), 5% (relative) | Mean difference (SD) less than threshold value | Mean difference (SD) less than threshold value |
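Several of the acceptance criteria above are stated as Bland-Altman agreement limits (e.g., -10 HU to 10 HU for PD15). As an illustrative sketch only, using entirely made-up numbers and not the manufacturer's actual analysis code, the limits of agreement between two software versions can be computed like this:

```python
import statistics

def bland_altman_limits(a, b):
    """Return (mean difference, lower/upper 95% limits of agreement)."""
    diffs = [x - y for x, y in zip(a, b)]
    mean_diff = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return mean_diff, mean_diff - 1.96 * sd, mean_diff + 1.96 * sd

# Hypothetical PD15 values (HU) from two software versions for five scans.
v3 = [-930.1, -912.4, -905.0, -948.2, -920.7]
v1 = [-931.0, -911.8, -906.1, -947.5, -921.3]

mean_diff, lo, hi = bland_altman_limits(v3, v1)
# A PD15-style criterion would require both limits within -10 HU to +10 HU.
within = (-10 <= lo) and (hi <= 10)
```

The percentage-based criteria (LAA-950HU, LAA-910HU) work the same way, with the paired differences expressed in percentage points rather than HU.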
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: Not explicitly stated in the provided text. The document mentions "scans were taken with a wide variety of scanner brands and models," and lists various scanner types but does not specify the number of cases.
- Data Provenance:
- Country of Origin: Not specified in the provided text.
- Retrospective or Prospective: Not specified in the provided text. However, the comparison against a predicate device and against expert-corrected segmentations suggests the study was likely retrospective.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Not explicitly stated beyond "human experts."
- Qualifications of Experts: Not specified.
4. Adjudication Method for the Test Set
- Adjudication Method: Not explicitly stated. The document mentions "segmentation which were corrected by human experts," implying that human experts refined or established the ground truth for (sub)segmental volumes and density scores. However, the correction process (e.g., consensus among multiple experts or a single expert's adjudication) is not detailed. Comparisons between the two devices were performed using Bland-Altman plots and ROC analysis.
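The fissure-completeness criterion is stated as an ROC Az value, which is the area under the ROC curve. As a hedged illustration (not the study's actual analysis, and with hypothetical scores and labels), Az can be computed directly via the Mann-Whitney interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one.

```python
def roc_az(scores, labels):
    """Az (ROC AUC) via the Mann-Whitney U statistic.

    Probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case; ties count as 0.5.
    """
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical fissure-completeness scores (0-1) with reference labels
# (1 = fissure complete per reference, 0 = incomplete).
scores = [0.95, 0.90, 0.80, 0.40, 0.30, 0.60]
labels = [1, 1, 1, 0, 0, 0]
az = roc_az(scores, labels)
```

Against a criterion like Az ≥ 0.95, the computed value would simply be compared to the threshold.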
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? No. The study was a "head-to-head performance testing" between the subject device (LungQ v3.0.0) and the predicate device (LungQ v1.1.0), and a comparison of LungQ v3.0.0 to human experts for (sub)segmental measurements. There is no mention of human readers evaluating cases with and without AI assistance to determine an effect size of AI on human performance.
- Effect Size of Human Readers with AI vs. without AI assistance: Not applicable, as an MRMC comparative effectiveness study was not performed.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Performance
- Was a standalone study done? Yes. The entire head-to-head performance study comparing LungQ v3.0.0 to LungQ v1.1.0, and the comparison of LungQ v3.0.0 outputs for (sub)segmental analysis against human expert corrections, represents standalone performance evaluation of the algorithm. The device itself is described as "stand-alone command-line software which must be run from a command-line interpreter and does not have a graphical user interface," further reinforcing its standalone nature.
7. Type of Ground Truth Used
- Ground Truth Type:
- Predicate Device Output: For lung and lobar volumes, LAA-950HU, LAA-910HU, PD15, and fissure completeness, the outputs of the predicate device (LungQ v1.1.0) served as a comparative reference.
- Expert Consensus/Correction: For (sub)segmental volumes and density scores, the ground truth was established by "segmentation which were corrected by human experts." This implies expert-adjudicated or expert-derived ground truth.
8. Sample Size for the Training Set
- The document does not provide information regarding the sample size used for the training set. The descriptions focus solely on the performance evaluation of the device in comparison to a predicate and human experts.
9. How the Ground Truth for the Training Set Was Established
- The document does not provide information on how the ground truth for the training set was established, as details about the training set itself are absent.