Quantib™ ND is a non-invasive medical image processing application that is intended for automatic labeling, visualization, and volumetric quantification of segmentable brain structures from a set of magnetic resonance (MR) images. The Quantib™ ND output consists of segmentations, visualizations, and volumetric measurements of brain structures and white matter hyperintensities. Volumetric measurements may be compared to reference centile data. It is intended to provide the trained medical professional with complementary information for the evaluation and assessment of MR brain images and to aid the trained medical professional in quantitative reporting. Quantib™ ND is a software application that runs on top of Myrian®.
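The comparison of volumetric measurements to reference centile data can be pictured as a simple percentile lookup. The sketch below is illustrative only: the reference volumes, centile levels, and function name are hypothetical placeholders, not Quantib™ ND's actual reference data or method.

```python
# Illustrative only: placing a measured structure volume against reference
# centile data. The reference values below are hypothetical placeholders,
# not Quantib ND's actual centile data.
import numpy as np

centile_levels = np.array([5.0, 25.0, 50.0, 75.0, 95.0])   # reference centiles
centile_volumes = np.array([6.2, 6.9, 7.4, 7.9, 8.6])      # hypothetical volumes (mL)

def approximate_centile(volume_ml: float) -> float:
    """Linearly interpolate the centile at which a measured volume falls."""
    return float(np.interp(volume_ml, centile_volumes, centile_levels))

print(approximate_centile(7.1))  # ~35th centile under these placeholder values
```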
Quantib™ ND is a post-processing analysis module for Myrian®, which provides 3D image visualization tools that create and display user-defined views and streamlines interpretation and reporting. It is intended for automatic labeling, visualization, and volumetric quantification of identifiable brain structures from magnetic resonance images (a 3D T1-weighted MR image, with an additional T2-weighted FLAIR MR image for white matter hyperintensities (WMH) segmentation). The segmentation system relies on a number of atlases, each consisting of a 3D T1-weighted MR image and a label map dividing the MR image into different tissue segments. Quantib™ ND provides quantitative information on both the absolute and relative volume of the segmented regions. The automatic WMH segmentation is to be reviewed and, if necessary, edited by the user before validation of the segmentation, after which volumetric information is accessible. Quantib™ ND consists of Quantib™ ND Baseline, which provides analysis of images from one time point, and Quantib™ ND Follow-Up, which provides longitudinal analysis of images from two time points. Quantib™ ND Follow-Up can only process images that have been processed by Quantib™ ND Baseline. Quantib™ ND is intended to provide the trained medical professional with complementary information for the evaluation and assessment of MR brain images and to aid the radiology specialist in quantitative reporting.
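The atlas-based approach described above can be illustrated with a minimal label-fusion sketch. It assumes the atlas label maps have already been registered (warped) to the patient's T1w image and uses simple majority voting; the function names are hypothetical, and this is not a description of Quantib™ ND's actual segmentation algorithm. The relative-volume helper assumes normalization to intracranial volume (ICV).

```python
# Minimal sketch of multi-atlas label fusion by majority voting. Assumes the
# per-atlas label maps have already been warped to the patient's T1w image;
# illustrative only, not Quantib ND's actual segmentation algorithm.
import numpy as np

def majority_vote_fusion(warped_label_maps: list[np.ndarray]) -> np.ndarray:
    """Fuse integer label maps of identical shape by voxel-wise majority vote."""
    stacked = np.stack(warped_label_maps, axis=0)        # shape: (n_atlases, X, Y, Z)
    n_labels = int(stacked.max()) + 1
    # Count how many atlases assign each label to each voxel, then pick the winner.
    votes = np.stack([(stacked == lab).sum(axis=0) for lab in range(n_labels)])
    return votes.argmax(axis=0).astype(stacked.dtype)

def relative_volume_percent(label_map: np.ndarray, label: int,
                            icv_mask: np.ndarray) -> float:
    """Structure volume expressed as a percentage of intracranial volume (ICV)."""
    return 100.0 * float((label_map == label).sum()) / float(icv_mask.sum())
```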
Here's a summary of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state acceptance criteria in the form of pre-defined thresholds for the Dice index or the absolute difference of relative volumes. However, the performance data are presented against manual segmentations, implying that the acceptance criteria amount to "good agreement" or "sufficient similarity" with manual segmentations, as judged by those two metrics (a generic sketch of both metrics follows the table). For the purpose of this table, I'll assume that the reported values demonstrate that the device met an implicit acceptance standard.
| Brain Structure / Metric | Acceptance Criteria (Implicit) | Reported Device Performance (Mean ± Std. Dev.) |
|---|---|---|
| **Brain Tissue Segmentations** | | |
| Brain Dice Index | "Good agreement" | 0.96 ± 0.01 |
| Brain Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 1.7 ± 1.3 |
| CSF Dice Index | "Good agreement" | 0.78 ± 0.05 |
| CSF Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 1.8 ± 1.3 |
| ICV Dice Index | "Good agreement" | 0.98 ± 0.01 |
| **Hippocampus Segmentations** | | |
| Hippocampus total Dice Index | "Good agreement" | 0.84 ± 0.03 |
| Hippocampus total Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 0.03 ± 0.02 |
| Hippocampus right Dice Index | "Good agreement" | 0.84 ± 0.03 |
| Hippocampus right Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 0.01 ± 0.01 |
| Hippocampus left Dice Index | "Good agreement" | 0.84 ± 0.03 |
| Hippocampus left Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 0.01 ± 0.01 |
| **Lobe Segmentations (Dataset C)** | | |
| Frontal lobe total Dice Index | "Good agreement" | 0.95 ± 0.01 |
| Frontal lobe total Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 1.95 ± 0.90 |
| Occipital lobe total Dice Index | "Good agreement" | 0.88 ± 0.03 |
| Occipital lobe total Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 0.87 ± 0.75 |
| Parietal lobe total Dice Index | "Good agreement" | 0.89 ± 0.03 |
| Parietal lobe total Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 2.81 ± 1.13 |
| Temporal lobe total Dice Index | "Good agreement" | 0.91 ± 0.01 |
| Temporal lobe total Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 1.33 ± 0.76 |
| Cerebellum total Dice Index | "Good agreement" | 0.98 ± 0.01 |
| Cerebellum total Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 0.47 ± 0.20 |
| **White Matter Hyperintensities (WMH)** | | |
| WMH Dice Overlap | "Good agreement" | 0.61 ± 0.13 |
| WMH Absolute Diff. of Rel. Volumes [pp] | "Good agreement" (for non-CE cases) | 0.2 ± 0.2 |
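For reference, the two reported metrics can be computed as in the generic sketch below. The Dice index is the standard overlap measure between the automatic and manual masks; the absolute difference of relative volumes is taken here in percentage points (pp), with each relative volume normalized to intracranial volume, which is an assumption since the document does not spell out the denominator used.

```python
# Generic sketch of the reported metrics for binary segmentation masks.
# Normalizing relative volume by ICV is an assumption; the document does not
# state which denominator was used.
import numpy as np

def dice_index(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """Dice overlap: 2|A ∩ M| / (|A| + |M|)."""
    intersection = np.logical_and(auto_mask, manual_mask).sum()
    return 2.0 * float(intersection) / float(auto_mask.sum() + manual_mask.sum())

def abs_diff_relative_volumes_pp(auto_mask: np.ndarray, manual_mask: np.ndarray,
                                 icv_mask: np.ndarray) -> float:
    """Absolute difference of relative volumes, in percentage points (pp)."""
    rel_auto = 100.0 * float(auto_mask.sum()) / float(icv_mask.sum())
    rel_manual = 100.0 * float(manual_mask.sum()) / float(icv_mask.sum())
    return abs(rel_auto - rel_manual)
```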
2. Sample Sizes Used for the Test Set and Data Provenance
- Brain Tissue, CSF, ICV (Dataset A):
- Sample Size: 33 T1w MR images.
- Data Provenance: "carefully selected to include data from multiple vendors and a series of representative scan settings." No specific country of origin or retrospective/prospective status is mentioned, but the description implies a historical or retrospective collection.
- Hippocampus (Dataset B):
- Sample Size: 89 T1w images.
- Data Provenance: Not explicitly detailed beyond being T1w images. Implied retrospective.
- Lobes (Dataset C):
- Sample Size: 13 T1w MR images.
- Data Provenance: Not explicitly detailed. Implied retrospective.
- White Matter Hyperintensities:
- Sample Size: 45 3D T1w images (7 contrast-enhanced), each with corresponding T2w FLAIR images.
- Data Provenance: "represented various scan settings." Implied retrospective.
- Note: The absolute difference of relative volumes for WMH was computed over 38 cases (those without contrast-enhancement).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- The document states that the segmentations were compared to "manual segmentations."
- It does not specify the number of experts who performed these manual segmentations nor their qualifications (e.g., radiologist with X years of experience).
4. Adjudication Method for the Test Set
- The document only mentions "manual segmentations" as the ground truth. It does not provide any information about an adjudication method (such as 2+1, 3+1, or none) for these manual segmentations. It implies a single manual segmentation was used as the reference.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- No, the provided text does not describe a multi-reader multi-case (MRMC) comparative effectiveness study evaluating how much human readers improve with AI vs. without AI assistance. The study focuses solely on the standalone performance of the AI algorithm compared to manual segmentations.
6. If a Standalone Study (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- Yes, a standalone performance study was done. The "Algorithm Performance" section details the comparison of the Quantib™ ND algorithm's segmentations and volume measurements against manual segmentations, without human-in-the-loop interaction with the AI.
7. The Type of Ground Truth Used
- The type of ground truth used was expert manual segmentation. The text explicitly states, "To validate the quality of Quantib™ ND volume measurements and segmentations, these were compared to manual segmentations of the same scan and their derived volumes."
8. The Sample Size for the Training Set
- The document does not provide information regarding the sample size used for the training set of the Quantib™ ND algorithm. It only discusses the test sets used for validation.
9. How the Ground Truth for the Training Set Was Established
- The document does not provide information on how the ground truth for the training set was established. It only details the method for establishing ground truth for the validation/test sets (manual segmentations).
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).