THINQ is intended for automatic labeling, visualization and volumetric quantification of segmentable brain structures from a set of MR images. Volumetric measurements may be compared to reference percentile data.
THINQ™ is a software-only, non-interactive, medical device for quantitative imaging, accepting as input 3D T1-weighted MRI scan data of the human head. THINQ™ produces as output a quantitative neuromorphometry report in PDF format. The report contains morphometric (volume) measurements and visualizations of various brain structures and compares these measurements to age- and gender-matched reference percentile data. The report includes images of the brain with color-coded segmentations, as well as plots showing how measurements compare to reference data. Additionally, to allow visual confirmation of the accuracy of the results, three segmentation overlays are created in DICOM-JPEG format, one in each anatomical plane: sagittal, coronal, and axial.
The THINQ™ processing pipeline performs an atlas-based segmentation of brain structures, followed by measurement of those structures and comparison to a reference dataset. The pipeline includes automated QA checks on the input DICOM 3D T1 MRI series to ensure adherence to imaging sequence requirements, checks on the data elements generated during processing, and a classifier that filters out potentially incorrect reports caused by corrupted image input.
THINQ™ is packaged as a container, for deployment and operation in a high-performance computing environment within a clinical workflow.
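As context for the statement that volumetric measurements "may be compared to reference percentile data," below is a minimal sketch of how such a comparison could be computed, assuming a simple empirical percentile against an age- and gender-matched reference sample. The function name, the synthetic reference data, and the normative model are illustrative assumptions; the document does not describe THINQ's actual reference dataset or percentile calculation.

```python
import numpy as np

def reference_percentile(volume_cm3: float, reference_volumes_cm3: np.ndarray) -> float:
    """Empirical percentile of a measured structure volume within a matched
    reference sample (hypothetical illustration, not the device's actual method)."""
    return 100.0 * float(np.mean(reference_volumes_cm3 <= volume_cm3))

# Example with a synthetic reference sample of hippocampal volumes (cm^3).
rng = np.random.default_rng(0)
synthetic_reference = rng.normal(loc=3.5, scale=0.4, size=1000)
print(f"{reference_percentile(3.1, synthetic_reference):.1f}th percentile")
```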
Here's a summary of the acceptance criteria and study details for the THINQ device, based on the provided text:
1. Acceptance Criteria and Device Performance
The provided document does not explicitly state acceptance criteria in a quantitative format (e.g., "Dice similarity coefficient must be >= 0.85"). Instead, it presents the reported device performance and implies that these results meet an acceptable standard, likely derived from a "literature review of neuroimaging publications."
Table of Reported Device Performance
| Structure | Dice, Mean (StDev) | AVE (cm³), Mean (StDev) | RVE, Mean (StDev) |
|---|---|---|---|
| Whole Brain | 0.94 (0.01) | 327.00 (111.48) | 0.30 (0.13) |
| Total Gray Matter | 0.82 (0.02) | 174.63 (46.42) | 0.24 (0.05) |
| Total White Matter | 0.87 (0.04) | 65.67 (37.38) | 0.18 (0.12) |
| Left Cortical Gray Matter | 0.92 (0.06) | 10.59 (6.51) | 0.05 (0.03) |
| Right Cortical Gray Matter | 0.92 (0.07) | 10.43 (6.67) | 0.05 (0.03) |
| Left Frontal Lobe | 0.90 (0.06) | 5.44 (3.83) | 0.07 (0.05) |
| Right Frontal Lobe | 0.90 (0.06) | 5.07 (3.86) | 0.06 (0.05) |
| Left Parietal Lobe | 0.88 (0.08) | 4.06 (3.04) | 0.08 (0.06) |
| Right Parietal Lobe | 0.88 (0.08) | 3.85 (2.86) | 0.07 (0.06) |
| Left Occipital Lobe | 0.82 (0.07) | 1.57 (1.22) | 0.07 (0.05) |
| Right Occipital Lobe | 0.82 (0.08) | 1.92 (1.97) | 0.08 (0.09) |
| Left Temporal Lobe | 0.89 (0.06) | 2.08 (1.89) | 0.04 (0.04) |
| Right Temporal Lobe | 0.89 (0.06) | 2.11 (1.81) | 0.04 (0.04) |
| Left Cerebral White Matter | 0.86 (0.04) | 32.60 (18.80) | 0.18 (0.12) |
| Right Cerebral White Matter | 0.86 (0.04) | 33.07 (18.88) | 0.18 (0.12) |
| Left Lateral Ventricle | 0.86 (0.07) | 2.32 (1.69) | 0.17 (0.15) |
| Right Lateral Ventricle | 0.85 (0.07) | 2.19 (1.59) | 0.18 (0.14) |
| Left Hippocampus | 0.78 (0.03) | 0.45 (0.29) | 0.14 (0.09) |
| Right Hippocampus | 0.79 (0.03) | 0.39 (0.28) | 0.12 (0.10) |
| Left Amygdala | 0.66 (0.05) | 0.60 (0.17) | 0.68 (0.24) |
| Right Amygdala | 0.64 (0.06) | 0.69 (0.19) | 0.74 (0.27) |
| Left Caudate | 0.78 (0.07) | 0.50 (0.35) | 0.17 (0.14) |
| Right Caudate | 0.78 (0.07) | 0.53 (0.34) | 0.18 (0.13) |
| Left Putamen | 0.82 (0.04) | 0.83 (0.35) | 0.20 (0.10) |
| Right Putamen | 0.82 (0.03) | 0.89 (0.35) | 0.21 (0.08) |
| Left Thalamus | 0.82 (0.03) | 1.51 (0.53) | 0.19 (0.05) |
| Right Thalamus | 0.83 (0.03) | 1.38 (0.45) | 0.18 (0.04) |
| Left Cerebellum | 0.91 (0.02) | 2.10 (1.43) | not reported |
| Right Cerebellum | 0.92 (0.02) | 2.07 (1.49) | 0.03 (0.02) |
| Intracranial Volume (ICV) | 3.42 (2.05)* | 50.18 (31.92) | 0.03 (0.02) |

*For ICV, an APD value is reported in place of Dice.
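For readers unfamiliar with the metrics in the table, the sketch below shows one conventional way to compute them from binary segmentation masks. It is a minimal illustration, assuming AVE means the absolute difference between measured and reference volumes and RVE means that difference normalized by the reference volume; the document itself does not give formal definitions of these metrics.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def volume_cm3(mask: np.ndarray, voxel_volume_mm3: float) -> float:
    """Structure volume in cm^3 from a binary mask and the per-voxel volume in mm^3."""
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0

def absolute_volume_error(measured_cm3: float, reference_cm3: float) -> float:
    """Assumed AVE: absolute difference between measured and reference volumes."""
    return abs(measured_cm3 - reference_cm3)

def relative_volume_error(measured_cm3: float, reference_cm3: float) -> float:
    """Assumed RVE: absolute volume error normalized by the reference volume."""
    return abs(measured_cm3 - reference_cm3) / reference_cm3

# Toy example on 10x10x10 masks with 1 mm isotropic voxels.
pred = np.zeros((10, 10, 10), dtype=bool)
truth = np.zeros((10, 10, 10), dtype=bool)
pred[2:8, 2:8, 2:8] = True
truth[3:9, 3:9, 3:9] = True
print(dice(pred, truth), absolute_volume_error(volume_cm3(pred, 1.0), volume_cm3(truth, 1.0)))
```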
Reproducibility Results:
| Structure | Reproducibility APD, Mean (StDev) |
|---|---|
| Whole Brain | 0.34 (0.29) |
| Total Gray Matter | 0.83 (0.80) |
| Total White Matter | 1.04 (1.12) |
| Left Cortical Gray Matter | 1.08 (0.89) |
| Right Cortical Gray Matter | 1.04 (0.88) |
| Left Frontal Lobe | 1.31 (1.30) |
| Right Frontal Lobe | 1.57 (2.41) |
| Left Parietal Lobe | 1.50 (1.31) |
| Right Parietal Lobe | 1.67 (2.73) |
| Left Occipital Lobe | 1.49 (1.16) |
| Right Occipital Lobe | 2.00 (1.41) |
| Left Temporal Lobe | 1.24 (1.37) |
| Right Temporal Lobe | 1.39 (1.15) |
| Left Cerebral White Matter | 1.15 (1.09) |
| Right Cerebral White Matter | 1.10 (1.20) |
| Left Lateral Ventricle | 1.44 (1.21) |
| Right Lateral Ventricle | 1.55 (1.12) |
| Left Hippocampus | 1.56 (1.76) |
| Right Hippocampus | 1.49 (1.57) |
| Left Amygdala | 1.25 (1.23) |
| Right Amygdala | 1.72 (1.36) |
| Left Caudate | 1.14 (1.22) |
| Right Caudate | 1.24 (1.10) |
| Left Putamen | 1.41 (1.15) |
| Right Putamen | 1.29 (0.90) |
| Left Thalamus | 0.86 (0.59) |
| Right Thalamus | 0.75 (0.63) |
| Left Cerebellum | 0.62 (0.58) |
| Right Cerebellum | 0.60 (0.54) |
| Intracranial Volume (ICV) | 0.30 (0.29) |
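The reproducibility values above are APD figures between repeated measurements of the same structure. As a hedged sketch, a common scan-rescan formulation is the absolute difference between the two volumes divided by their mean, expressed in percent; the document does not state the exact formula THINQ uses, so this normalization is an assumption.

```python
def absolute_percent_difference(volume_a_cm3: float, volume_b_cm3: float) -> float:
    """Assumed APD between two repeated measurements of the same structure
    (e.g. a scan-rescan pair), normalized by their mean and given in percent."""
    mean_volume = (volume_a_cm3 + volume_b_cm3) / 2.0
    return 100.0 * abs(volume_a_cm3 - volume_b_cm3) / mean_volume

# Example: two whole-brain measurements about 0.34% apart.
print(round(absolute_percent_difference(1200.0, 1195.9), 2))
```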
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: The "validation dataset was composed of 645 unique MR images."
- Data Provenance: The document states that the dataset included "a wide range of patient characteristics (e.g. age, gender, disease case) and image acquisition varieties (e.g. scanner manufacturer, image acquisition protocols, data noise and artifacts)." However, specific details such as the country of origin or whether the data was retrospective or prospective are not provided.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: The document states that performance testing involved comparisons "to expert-labeled brain images" but does not specify the number of experts used.
- Qualifications of Experts: The document refers to them as "expert-labeled," but does not provide specific qualifications (e.g., radiologist with X years of experience). It does refer to "gold-standard computer-aided expert manual segmentation" in the conclusion, implying a high standard of expertise.
4. Adjudication Method for the Test Set
The document does not specify an adjudication method for establishing ground truth, beyond stating it was "expert-labeled."
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done
- No. The document does not describe an MRMC comparative effectiveness study assessing how much human readers improve with AI assistance versus without it. The study focuses on the standalone performance of the THINQ device (segmentation accuracy and reproducibility) against established ground truth.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, a standalone study was done. The performance data presented (Dice, AVE, RVE, APD for accuracy and reproducibility) directly represents the algorithm's performance in automatically segmenting brain structures and quantifying volumes without human intervention during the segmentation process. The device explicitly states it is "a software-only, non-interactive, medical device for quantitative imaging."
7. The Type of Ground Truth Used
- Expert Consensus / Manual Segmentation: The segmentation accuracy was evaluated by comparing THINQ's output to "expert-labeled brain images" and "gold-standard computer-aided expert manual segmentation."
8. The Sample Size for the Training Set
- The document does not explicitly state the sample size for the training set. It only mentions the "validation dataset" of 645 unique MR images, which is typically distinct from the training set.
9. How the Ground Truth for the Training Set Was Established
- The document does not explicitly describe how the ground truth for the training set was established. It only mentions the process for the validation/test set, which involved "expert-labeled brain images."