510(k) Data Aggregation (270 days)
CDM Insights is post-processing image analysis software that assists trained healthcare practitioners in viewing, analyzing, and evaluating MR brain images of adults > 45 years of age.
CDM Insights provides the following functionalities:
- Automated segmentation and quantitative analysis of individual brain structures and white matter hyperintensities
- Quantitative comparison of brain structures and derived values with normative data from a healthy population
- Presentation of results for reporting that includes numerical values as well as visualization of these results
CDM Insights is automated post-processing medical device software that is used by radiologists, neurologists, and other trained healthcare practitioners familiar with the post-processing of magnetic resonance images. It accepts DICOM images acquired with supported protocols and performs: automatic segmentation and quantification of brain structures and lesions, automatic post-acquisition analysis of diffusion-weighted magnetic resonance imaging (DWI) data, and comparison of derived image metrics from multiple time points.
The values for a given patient are compared against age-matched percentile data from a population of healthy reference subjects. White matter hyperintensities can be visualized and quantified by volume. Output of the software provides numerical values and derived data as graphs and anatomical images with graphical color overlays.
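The age-matched percentile comparison described above can be sketched as a lookup into a normative reference table. This is an illustrative sketch only: the bracket boundaries, structure names, and volumes below are hypothetical, not the device's actual normative data or method.

```python
from bisect import bisect_left

# Hypothetical normative table: for each age bracket [lo, hi), sorted
# volumes (mL) of one brain structure from healthy reference subjects.
# All numbers are made up for illustration.
NORMATIVE = {
    (45, 60): [3.1, 3.3, 3.5, 3.6, 3.8, 3.9, 4.0, 4.2, 4.4, 4.6],
    (60, 75): [2.8, 3.0, 3.2, 3.3, 3.5, 3.6, 3.8, 3.9, 4.1, 4.3],
}

def age_matched_percentile(age: int, volume: float) -> float:
    """Percentile of `volume` within the age-matched reference sample."""
    for (lo, hi), ref in NORMATIVE.items():
        if lo <= age < hi:
            # Fraction of reference subjects with a smaller value.
            rank = bisect_left(ref, volume)
            return 100.0 * rank / len(ref)
    raise ValueError(f"no normative bracket for age {age}")

print(age_matched_percentile(67, 3.4))  # → 40.0
```

A real implementation would interpolate percentiles from a much larger reference sample; the point here is only the age-bracketed lookup.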
CDM Insights output is provided in standard DICOM format as a DICOM-encapsulated PDF report.
The provided text describes the acceptance criteria and the study that proves the device (CDM Insights) meets these criteria. Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
Measure | Acceptance Criteria (from primary predicate) | Reported Device Performance
---|---|---
Accuracy of segmentation for white matter hyperintensities (WMH) | Mean Dice overlap score ≥ 0.58 | Mean Dice overlap score = 0.66 (SD = 0.15)
Accuracy of segmentation for cortical regions | Mean Dice overlap score ≥ 0.58 for each region | Orbito-frontal: 0.58 (0.10); Superior-frontal: 0.72 (0.05); Sensorimotor: 0.69 (0.14); Ventral-temporal: 0.58 (0.05); Anterior-cingulate: 0.60 (0.09); Precuneus: 0.58 (0.08); Lateral-occipital: 0.59 (0.11); Medial-occipital: 0.63 (0.06)
Visual ratings of segmentation quality and cortical surface quality | Not stated numerically; a "good" or "excellent" rating is implied for acceptance | Typically rated "good" or "excellent" by neuroradiologists
Repeatability of measurements | Not stated numerically; successful confirmation implied | Confirmed on a total of 121 healthy individuals with two or three repeated MRI scans
Reproducibility across MRI scanner models and protocols | Not stated numerically; successful quantification implied | Quantified across a range of MRI scanner models and protocol parameters using scans from over 1,500 unique subjects
Accuracy of percentiles (of normative data) | Not stated numerically; successful testing implied | Tested with almost 2,000 test scans
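The Dice overlap score used as the acceptance criterion above is a standard measure of agreement between two segmentations: twice the overlap divided by the sum of the two region sizes. A minimal sketch, treating each binary mask as a set of voxel coordinates (the masks below are toy data, not from the study):

```python
def dice_score(mask_a, mask_b):
    """Dice overlap between two binary masks given as sets of voxel
    coordinates: 2 * |A ∩ B| / (|A| + |B|)."""
    if not mask_a and not mask_b:
        return 1.0  # convention: two empty masks agree perfectly
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Toy example: algorithm output vs. an expert (gold-standard) mask.
algo = {(0, 0), (0, 1), (1, 0), (1, 1)}
expert = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(round(dice_score(algo, expert), 2))  # → 0.75
```

A score of 1.0 means perfect overlap and 0.0 means no overlap, so the 0.58 threshold requires substantial agreement with the expert delineations.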
2. Sample Sizes and Data Provenance
- Test Set Sample Size:
- Accuracy of segmentation: 60 cases for brain region and WMH segmentation accuracy.
- Repeatability: 121 healthy individuals.
- Reproducibility: Over 1500 unique subjects.
- Accuracy of percentiles: Almost 2000 test scans.
- Data Provenance:
  - Scans were acquired on different scanner models from multiple manufacturers.
  - The data included a group of cognitively healthy individuals and a mix of individuals with disorders (Alzheimer's disease, mild cognitive impairment, frontotemporal dementia, multiple sclerosis).
  - Data were obtained from 13 different source cohorts, 7 of which were based in the USA.
  - Race or ethnicity information was available for the majority of individuals: more than 20% non-white and more than 5% Hispanic.
  - The studies appear to be primarily retrospective (existing datasets), though the repeated MRI scans used for repeatability testing may include a prospective component.
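Repeatability on repeated scans of the same subject, as described above, is commonly summarized with a within-subject coefficient of variation (CV). The document does not state which metric was used; the sketch below shows one common choice on hypothetical data.

```python
from statistics import mean, stdev

def within_subject_cv(repeated_scans):
    """Mean within-subject coefficient of variation (%) across subjects,
    each measured on two or three repeated scans of the same structure."""
    cvs = [100.0 * stdev(scans) / mean(scans) for scans in repeated_scans]
    return mean(cvs)

# Hypothetical repeated volume measurements (mL) for three subjects.
data = [
    [4.0, 4.1],        # two repeats
    [3.5, 3.6, 3.4],   # three repeats
    [4.2, 4.2],
]
print(round(within_subject_cv(data), 2))  # → 1.53
```

A low within-subject CV indicates that the derived volumes are stable across repeated acquisitions, which is what the repeatability study on 121 individuals is meant to establish.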
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Not explicitly stated; the document refers only to "US board-certified neuroradiologists."
- Qualifications of Experts: US board-certified neuroradiologists. Their years of experience are not specified.
4. Adjudication Method for the Test Set
The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1). It states "tested against a gold standard of US board-certified neuroradiologists," implying their consensus or individual expert delineation as the ground truth.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The provided text does not mention a multi-reader, multi-case (MRMC) comparative effectiveness study comparing human readers with AI assistance versus without AI assistance. The study focuses on the standalone performance of the algorithm.
6. Standalone (Algorithm Only) Performance Study
Yes, a standalone performance study was done. The performance metrics (Dice scores, visual ratings) are presented for the algorithm's output directly against expert-established ground truth, without a human-in-the-loop component described in these performance tests.
7. Type of Ground Truth Used
The ground truth used for accuracy assessments (segmentation and cortical surfaces) was established by expert consensus/delineation (a "gold standard of US board-certified neuroradiologists").
8. Sample Size for the Training Set
The "almost 2000 test scans" used for percentile accuracy are explicitly stated to be "independent of training scans used to derive percentiles." The sample size of the training set itself, however, is not specified in this document.
9. How Ground Truth for Training Set Was Established
The document states that training scans were "used to derive percentiles," implying that the training set was used to construct the normative data. The method for establishing ground truth within that training set (e.g., for segmentation, if segmentation labels were part of training) is not detailed; the normative data itself effectively serves as the reference for the percentile comparisons.