510(k) Data Aggregation (91 days)
ANDI is intended for the display of medical images and other healthcare data. It includes functions for processing MR images, atlas-assisted visualization, segmentation, and volumetric quantification of segmentable brain structures. The output is generated for use by a system capable of reading DICOM image sets.
The information presented by ANDI does not provide prediction, diagnosis, or interpretation of brain health. Clinical interpretation and decision-making are the responsibility of the physician, who must review all clinical information associated with a patient in order to make a diagnosis and to determine the next steps in the clinical care of the patient.
Typical users of ANDI are medical professionals, including but not limited to neurologists and radiologists. ANDI should be used only as adjunctive information. The decision made by trained medical professionals will be considered final.
ANDI is software as a medical device (SaMD) that can be deployed on a cloud-based system, or installed on-premises. It is delivered as software as a service (SaaS) and operates without a graphical user interface. The software can be used to perform DICOM image viewing, image processing, and analysis, specifically designed to analyze brain MRI data. It processes diffusion-weighted and T1-weighted images to quantify and visualize white matter microstructure, providing adjunctive information to aid clinical evaluation. An optional AI-based segmentation feature enables quantification of the volume of gray matter regions. The results are output in a report that presents reference information to assist trained medical professionals in clinical decision-making by enabling comparisons between a patient's data, a normative database, and the patient's longitudinal data.
The document is a 510(k) clearance letter for ANDI 2.0. The device is a "Medical image management and processing system" that processes MR images for atlas-assisted visualization, segmentation, and volumetric quantification of segmentable brain structures. It provides adjunctive information to aid clinical evaluation, with the final clinical interpretation and decision-making remaining the responsibility of the physician.
Here's an analysis of the acceptance criteria and the study demonstrating that the device meets them:
1. A table of acceptance criteria and the reported device performance:
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Accuracy and Robustness of Brain Regions Segmentation (Dice Coefficient) | |
| ≥ 0.75 for major subcortical brain structures | Average Dice coefficients ranged from 0.89 to 0.96 for major subcortical brain structures. |
| ≥ 0.8 for major cortical brain structures | Average Dice coefficients ranged from 0.79 to 0.93 for major cortical brain structures. |
| Reproducibility of Brain Region Segmentation (Maximum Absolute Volume Difference) | |
| Maximum absolute volume difference below 7% | Mean absolute volume difference of 2.1% across major cortical and subcortical brain structures, with individual structures ranging from 1.2% to 3.9%. |
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
- Accuracy and Robustness Test Set:
- Sample Size: 71 subjects.
- Demographics: 35 females, 36 males; age range 18-86 years.
- Geographic Origin: 38 subjects were of USA origin.
- Health Status: 35 healthy subjects; 36 diseased subjects (Multiple Sclerosis (n=11), Parkinson's disease (n=12), Alzheimer's disease (n=12), mild cognitive impairment (n=1)).
- Data Provenance: The document does not explicitly state whether this data was retrospective or prospective, but the phrase "images preprocessed by ANDI" implies pre-existing data. The selection process (stratified by age, gender, pathology, MRI manufacturer, and field strength) suggests a retrospective collection assembled to represent a diverse population.
- Reproducibility Test Set:
- Sample Size: 59 subjects (with 2 timepoints each).
- Demographics: 30 females, 29 males; age range 23-86 years.
- Geographic Origin: 38 subjects were of USA origin.
- Health Status: Only healthy subjects were selected to avoid bias from disease progression.
- Data Provenance: The document does not explicitly state whether this data was retrospective or prospective. The mention of "2 timepoints" indicates longitudinal data, which could come from prospective follow-ups or from retrospectively re-analyzed longitudinal datasets.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: A panel of 3 board-certified neuroradiologists.
- Qualifications: Board-certified neuroradiologists.
- Process: 71 preprocessed T1 images were first pre-segmented using Freesurfer v7.4.1. The resulting segmentations were then manually corrected by "an expert" (singular; qualifications not specified) and subsequently "approved" by the panel of 3 board-certified neuroradiologists.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
The adjudication method for establishing ground truth was a consensus-based approval process: a panel of 3 board-certified neuroradiologists approved segmentations that had first been manually corrected by a single expert. This is a form of expert consensus and approval rather than a formal "2+1" or "3+1" voting scheme, in which multiple readers rate cases independently and disagreements are then adjudicated. Here, the neuroradiologists approved an already corrected segmentation.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
No, a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with AI assistance versus without AI assistance was not done. The document explicitly states: "No clinical studies were considered necessary and performed." The performance testing focused on the standalone algorithm's accuracy and reproducibility against an expert-approved ground truth.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
Yes, a standalone algorithm-only performance study was done. The "AI / ML performance data" section details the evaluation of "ANDI 2.0's brain regions segmentation" against an "expert approved ground truth" using Dice coefficients and volume differences. This assesses the algorithm's performance independent of real-time human interaction.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
The ground truth used for the test set was expert consensus, derived from Freesurfer pre-segmentations, manually corrected by an expert, and then approved by a panel of 3 board-certified neuroradiologists.
8. The sample size for the training set:
- AI/ML Module Training Set: 140 representative subjects.
- AI/ML Module Validation Set: 747 independent subjects.
9. How the ground truth for the training set was established:
The document states that the device incorporates a "pretrained third-party brain segmentation algorithm," which was "subjected to training using 140 representative subjects," and that "Validation data included 747 independent subjects." However, the document does not describe how the ground truth for the third party's training or validation data was established. It notes only that the data used to evaluate ANDI's integration of the algorithm was independent of the algorithm's training data ("ensuring data independence since ANDI-preprocessed images were not available for the training of the algorithm by the third-party algorithm").