The CorInsights MRI Medical Image Processing Software is intended for automatic labeling, visualization and volumetric quantification of segmentable brain structures from MRI images. Volumetric measurements are compared to reference percentile data. CorInsights MRI is for adults age 45 to 95.
CorInsights MRI is a fully automated MR medical image processing software intended for automatic labeling, visualization and volumetric quantification of identifiable brain structures from DICOM-formatted magnetic resonance images. The output consists of a PDF report for review, suitable for research and clinical use, and a DICOM image showing the anatomical structure boundaries identified by the software. The proposed device provides morphometric measurements based on T1 MRI series. The output of this software-only device includes morphometric reports that compare measured volumes to age- and gender-matched reference data, and an image volume annotated with color overlays representing each segmented region. The architecture has a proprietary automated internal process that includes artifact correction, atlas-based segmentation, volume calculation, and report generation. Quality control is automated and includes image header checks to verify that the scan acquisition protocol and provided data adhere to system requirements, an image morphometry check, a tissue contrast check, and value range checks.
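As a concrete illustration of the automated quality-control gate described above, the sketch below shows how such checks might be structured. All names, fields, and thresholds (e.g. `run_quality_control`, the slice-thickness limit) are hypothetical; the actual CorInsights MRI pipeline is proprietary and not detailed in the submission.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical illustration of the automated quality-control gate described
# above (header check, image morphometry check, tissue contrast check, and
# value range checks). All names and thresholds are illustrative only; the
# actual CorInsights MRI implementation is proprietary.

@dataclass
class ScanMetadata:
    sequence_type: str          # e.g. "T1"
    slice_thickness_mm: float

@dataclass
class QCResult:
    passed: bool
    failures: List[str] = field(default_factory=list)

def run_quality_control(meta: ScanMetadata,
                        brain_volume_ml: float,
                        gm_wm_contrast: float,
                        structure_volumes_ml: Dict[str, float]) -> QCResult:
    failures: List[str] = []

    # 1. Header check: acquisition protocol must match system requirements.
    if meta.sequence_type != "T1":
        failures.append("non-T1 series")
    if meta.slice_thickness_mm > 1.5:                  # illustrative limit
        failures.append("slice thickness out of range")

    # 2. Image morphometry check: overall brain size must be plausible.
    if not 800.0 <= brain_volume_ml <= 2000.0:         # illustrative range
        failures.append("implausible total brain volume")

    # 3. Tissue contrast check: gray/white contrast must be sufficient
    #    for atlas-based segmentation.
    if gm_wm_contrast < 0.1:                           # illustrative threshold
        failures.append("insufficient gray/white contrast")

    # 4. Value range checks: each computed volume must fall in a sane range.
    for name, vol in structure_volumes_ml.items():
        if vol <= 0.0 or vol > 2000.0:                 # illustrative bounds
            failures.append(f"value out of range for {name}")

    return QCResult(passed=not failures, failures=failures)
```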
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state pre-defined acceptance criteria with specific numeric thresholds. Instead, it describes accuracy and reproducibility goals in terms of statistical measures (ICC, DICE coefficient, mean absolute percentage difference) evaluated against ground truth or in test-retest scenarios.
The "acceptance criteria" in the table below are therefore inferred from the reported performance values, as these are the results presented to demonstrate the device's capability against known ground truth or expected variation. A sketch of the standard definitions of these metrics follows the table.
| Category | Specific Measure | Acceptance Criteria (Implied from Reported Performance) | Reported Device Performance |
|---|---|---|---|
| Accuracy: Hippocampal Volume | IntraClass Correlation (ICC) | ICC >= 0.95 (highly correlated with ground truth) | 0.95 |
| | DICE Coefficient (Left) | DICE >= 83% (good overlap with ground truth) | 83% (SD 2.5%) |
| | DICE Coefficient (Right) | DICE >= 83% (good overlap with ground truth) | 83% (SD 2.7%) |
| | Mean Absolute % Difference (Left) | MAD <= 6.2% (low deviation from ground truth) | 6.2% (SD 4.4%) |
| | Mean Absolute % Difference (Right) | MAD <= 5.9% (low deviation from ground truth) | 5.9% (SD 5.1%) |
| Accuracy: Cortical Segmentation (Total Gray Volume) | IntraClass Correlation (ICC) | ICC >= 0.99 (highly correlated with ground truth) | 0.99 |
| | DICE Coefficient | DICE >= 95% (excellent overlap with ground truth) | 95% (SD 1.6%) |
| | Mean Absolute % Difference | MAD <= 4.5% (very low deviation from ground truth) | 4.5% (SD 1.8%) |
| Accuracy: Cortical Subregions | DICE Coefficient (range) | DICE 81-93% (good to excellent overlap) | 81-93% |
| | Mean Absolute % Difference (range) | MAD 4.6-13.8% (low to moderate deviation) | 4.6% to 13.8% |
| Accuracy: Intracranial Volume (ICV) | IntraClass Correlation (ICC) | ICC >= 0.89 (strong correlation with ground truth) | 0.89 |
| | DICE Coefficient | DICE >= 95% (excellent overlap with ground truth) | 95% (SD 1.1%) |
| | Mean Absolute % Difference | MAD <= 5.2% (low deviation from ground truth) | 5.2% (SD 4.4%) |
| Accuracy: Ventricles (Left & Right) | IntraClass Correlation (ICC) | ICC >= 0.98 (very strong correlation with ground truth) | 0.98 (left and right) |
| | DICE Coefficient (Left) | DICE >= 88% (good overlap with ground truth) | 88% (SD 5.2%) |
| | DICE Coefficient (Right) | DICE >= 87% (good overlap with ground truth) | 87% (SD 5.6%) |
| | Mean Absolute % Difference (Left) | MAD <= 13.9% (moderate deviation from ground truth) | 13.9% (SD 9.2%) |
| | Mean Absolute % Difference (Right) | MAD <= 15.2% (moderate deviation from ground truth) | 15.2% (SD 9.9%) |
| Reproducibility | Average IntraClass Correlation (ICC) | ICC >= 0.97 (very high test-retest consistency) | 0.97 |
| | DICE Coefficient | DICE >= 89% (good test-retest overlap) | 89% (SD 4.0%) |
| | Mean Absolute % Difference (range, across all volumes) | MAD 0.7-5.8%, average <= 2.3% (very low test-retest variation) | 0.7% to 5.8%, average 2.3% (SD 2.7%) |
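For reference, the three agreement metrics in the table have conventional definitions. The submission does not spell out the exact formulations used, so the sketch below assumes the standard ones: DICE as the volumetric overlap of binary segmentation masks, mean absolute percentage difference (MAD) relative to the ground truth volume, and ICC computed under a two-way, absolute-agreement, single-measurement model (ICC(2,1), Shrout and Fleiss). This is an illustration, not the vendor's implementation.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """DICE overlap of two binary segmentation masks: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def mean_absolute_pct_difference(pred_vol: np.ndarray, gt_vol: np.ndarray) -> float:
    """Mean of |predicted - ground truth| / ground truth across subjects, in percent."""
    return float(np.mean(np.abs(pred_vol - gt_vol) / gt_vol) * 100.0)

def icc_2_1(measurements: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    `measurements` has shape (n_subjects, k_measurements); columns might be
    algorithm vs. ground truth volumes (accuracy) or test vs. retest volumes
    (reproducibility).
    """
    n, k = measurements.shape
    grand_mean = measurements.mean()
    row_means = measurements.mean(axis=1)   # per-subject means
    col_means = measurements.mean(axis=0)   # per-column means

    ss_total = ((measurements - grand_mean) ** 2).sum()
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )
```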
2. Sample Size Used for the Test Set and Data Provenance
- Hippocampal Volume Accuracy: 80 subjects from the HaRP database.
- Cortical Segmentation Accuracy: 80 subjects from a variety of databases.
- Intracranial Volume (ICV) and Ventricular Accuracy: An additional 70 subjects from a variety of databases.
Summing these cohorts gives approximately 230 subjects (80 hippocampal + 80 cortical + 70 ICV/ventricular), although the "variety of databases" may overlap between cohorts. The text later states that "In total, more than 1,400 scans from over 1,100 individuals were used in testing of CorInsights MRI," implying that these 230 subjects are part of a larger testing pool.
Data Provenance:
The text refers to "the HaRP database" (Boccardi, et al.) and "a variety of databases." It also mentions testing scenarios including "normal subject scans, as well as data sets expected to have below normal gray tissue volumes or above normal ventricle volumes based upon well-established literature."
While specific countries are not mentioned, the use of the "HaRP database" (most likely the EADC-ADNI Harmonized Hippocampal Protocol benchmark dataset of Boccardi et al., an international effort) and "a variety of databases" suggests a diverse, likely multi-center collection. The study appears to be retrospective, as it uses existing databases with pre-established ground truth.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Not explicitly stated as a specific number. The text mentions "manual ground truth segmentation generated by neuroanatomy experts."
- Qualifications of Experts: "Neuroanatomy experts." No specific years of experience or titles (e.g., radiologist) are provided for these experts.
4. Adjudication Method for the Test Set
The document does not specify an adjudication method (such as 2+1 or 3+1 consensus) for establishing the ground truth. It simply states that "manual ground truth segmentation generated by neuroanatomy experts" was used. This suggests that the ground truth for the test sets was derived from these experts' segmentations, but the process for resolving disagreements among multiple experts (if multiple were used per case) is not detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not reported. The performance testing focuses solely on the device's algorithmic performance against established ground truth volumes and its reproducibility. There is no mention of human readers improving with or without AI assistance, which would be the focus of an MRMC study. A clinical investigation was explicitly stated as "not required."
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes, the reported study is a standalone performance assessment of the CorInsights MRI algorithm. All accuracy and reproducibility metrics (ICC, DICE, MAD) compare the algorithm's output directly to the "ground truth volumes" or between multiple runs of the algorithm, without involving human interpretation in the performance evaluation.
7. The Type of Ground Truth Used
The primary type of ground truth used for the test set was expert manual segmentation.
- For hippocampal volume, it was derived from "80 subjects from the HaRP database as ground truth (Boccardi, et al.)," which is typically manually segmented and curated.
- For cortical segmentation, it was "manual ground truth segmentation generated by neuroanatomy experts."
- Similarly for ICV and ventricular accuracy, it was "manual ground truth segmentation generated by neuroanatomy experts."
Additionally, for the "Validation of Volume Measurement in Clinically Relevant Cases," the device's measurements were compared against "percentile and z-score ranges expected based upon peer reviewed published literature for these data sets," which could be considered outcomes data or established clinical expectations based on literature.
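The comparison to "percentile and z-score ranges expected based upon peer reviewed published literature" implies converting a measured volume into an age- and sex-adjusted position within a normative reference distribution. A minimal sketch of that conversion is shown below, assuming the normative data are summarized as a mean and standard deviation per age/sex stratum; the table values and the Gaussian assumption are illustrative, and the actual normative model used by CorInsights MRI is not described in the submission.

```python
from scipy.stats import norm

# Hypothetical normative table: (sex, age_band) -> (mean_volume_ml, sd_ml).
# Values are illustrative, not taken from the CorInsights MRI reference data.
NORMATIVE = {
    ("F", "70-74"): (3.0, 0.35),   # e.g. total hippocampal volume in mL
    ("M", "70-74"): (3.2, 0.38),
}

def volume_to_percentile(volume_ml: float, sex: str, age_band: str):
    """Convert a measured volume to a z-score and percentile relative to the
    age- and sex-matched normative reference (assumed Gaussian here)."""
    mean, sd = NORMATIVE[(sex, age_band)]
    z = (volume_ml - mean) / sd
    percentile = norm.cdf(z) * 100.0
    return z, percentile

z, pct = volume_to_percentile(2.4, "F", "70-74")
print(f"z = {z:.2f}, percentile = {pct:.1f}")
```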
8. The Sample Size for the Training Set
The document does not explicitly state the sample size of the training set used to develop the CorInsights MRI algorithm. It only details the sample size for the "Normative Reference Database Development," which lists 269 male and 331 female individuals (total 600 individuals). This database was used to establish reference percentile data for clinical comparison, but it's not explicitly stated if or how this specific cohort was used in the training of the segmentation algorithm itself.
9. How the Ground Truth for the Training Set Was Established
Since the training set size is not explicitly stated, the method for establishing its ground truth is also not detailed.
However, for the "Normative Reference Database Development" (which might overlap with training or be used for post-segmentation comparison), the ground truth for the "normal" cohort was established through:
- Clinical diagnosis of being cognitively normal.
- Confirmation of being negative for amyloid pathology.
- Confirmation of being negative for other potential confounding abnormalities (overt vascular disease, stroke, tumor, normal pressure hydrocephalus, no history of traumatic brain injury or severe neuropsychiatric illness).
This thorough screening process aimed to ensure the reference data represented genuinely normal values. It's an "expert-defined" ground truth based on clinical criteria and screening rather than manual segmentation for training purposes.