Search Results
Found 3 results
510(k) Data Aggregation
(122 days)
NeuroQuant
NeuroQuant is intended for automatic labeling, visualization and volumetric quantification of segmentable brain structures and lesions from a set of MR images. Volumetric measurements may be compared to reference percentile data.
NeuroQuant is a fully automated MR imaging post-processing software medical device that provides automatic labeling, visualization, and volumetric quantification of brain structures and lesions from a set of MR images and returns segmented images and morphometric reports.
NeuroQuant provides morphometric measurements of brain structures based on a 3D T1 MRI series. The optional use of the T2 FLAIR MR series and T2* GRE/SWI series allows for additional quantification of T2 FLAIR hyperintense lesions and T2* GRE/SWI hypointense lesions.
The device is used by medical professionals in imaging centers, hospitals, and other healthcare facilities as well as by clinical researchers. When used clinically, the output must be reviewed by a radiologist or neuroradiologist. The results are typically forwarded to the referring physician, most commonly a neurologist. The device is a "Prescription Device" and is not intended to be used by patients or other untrained individuals.
From a workflow perspective, the device is packaged as a computing appliance that is capable of supporting DICOM standard input and output. NeuroQuant supports data from all major MRI manufacturers and a variety of field strengths. For best results, scans should be acquired using specified protocols provided by CorTechs Labs.
As part of processing, the data is corrected by NeuroQuant for image acquisition artifacts, including gradient nonlinearities and bias field inhomogeneity, to improve overall image quality.
Next, image baseline intensity levels for gray and white matter are identified and corrected for scanner variability. The scan is then aligned with the internal anatomical atlas by a series of transformations. Probabilistic methods and neural network models are then used to label each voxel with an anatomical structure based on location and signal intensities.
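As a rough illustration of probabilistic, atlas-informed voxel labeling, the sketch below combines a location prior with a per-class Gaussian intensity likelihood and assigns each voxel the most probable tissue label. All class means, noise levels, and priors here are invented for illustration; the device's actual atlas, transformations, and neural network models are proprietary and not described in the summary.

```python
import numpy as np

# Toy illustration of probabilistic voxel labeling: combine an atlas prior
# (location information) with a Gaussian intensity likelihood per tissue
# class, then take the most probable class per voxel. All numbers below are
# invented for illustration; they are not the device's parameters.
rng = np.random.default_rng(1)
intensity = rng.normal(100.0, 30.0, size=(32, 32, 32))   # normalized MR intensities
class_means = np.array([60.0, 110.0, 160.0])              # assumed CSF, GM, WM means
class_sigma = 15.0
atlas_prior = np.full((32, 32, 32, 3), 1.0 / 3.0)         # uniform prior as a stand-in

# Likelihood of each voxel's intensity under each class (unnormalized Gaussian)
diff = intensity[..., None] - class_means                  # shape (32, 32, 32, 3)
likelihood = np.exp(-0.5 * (diff / class_sigma) ** 2)

# Posterior is proportional to prior x likelihood; label each voxel by argmax
posterior = atlas_prior * likelihood
labels = posterior.argmax(axis=-1)                         # 0=CSF, 1=GM, 2=WM
print(np.bincount(labels.ravel(), minlength=3))            # voxel counts per class
```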
Output of the software provides values as numerical volumes, and images of derived data as grayscale intensity maps and as color overlays on top of the anatomical image. The outputs are provided in standard DICOM format as image series and reports that can be displayed on many commercial DICOM workstations.
The software is designed without the need for a user interface after installation. Any processing errors are reported either in the output series error report or system log files.
The software can provide data on age and gender-matched normative percentiles. The default reference percentile data for NeuroQuant comprises normal population data.
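As an illustration of the normative comparison, the sketch below computes an empirical percentile of a measured structure volume against a matched reference cohort. The structure, cohort, and numbers are synthetic stand-ins; the device's actual normative data and percentile method are not described here.

```python
import numpy as np

def normative_percentile(volume: float, reference_volumes: np.ndarray) -> float:
    """Empirical percentile of a measured volume within an age- and
    sex-matched reference cohort (the reference data here is synthetic)."""
    return 100.0 * np.mean(reference_volumes <= volume)

# Synthetic reference distribution standing in for matched normative data
rng = np.random.default_rng(42)
reference = rng.normal(loc=3.5, scale=0.4, size=500)   # e.g., hippocampal volumes in cc
print(f"Percentile: {normative_percentile(3.1, reference):.0f}")
```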
The device provides DICOM Storage capabilities to receive MRI series in DICOM format from an external source, such as an MRI scanner or PACS server. The device provides transient data storage only. If additional scans from other time points are available, the software can perform change analysis.
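To give a concrete sense of the DICOM Storage capability described above, here is a minimal Storage SCP (receiver) sketch using the pynetdicom library. The AE title, port, and staging directory are assumptions; this is a generic illustration, not the device's implementation.

```python
from pathlib import Path

from pynetdicom import AE, evt, AllStoragePresentationContexts

OUTPUT_DIR = Path("incoming")                 # hypothetical staging directory
OUTPUT_DIR.mkdir(exist_ok=True)

def handle_store(event):
    """Write each received DICOM instance to disk, keyed by SOP Instance UID."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    ds.save_as(OUTPUT_DIR / f"{ds.SOPInstanceUID}.dcm", write_like_original=False)
    return 0x0000                             # DICOM Success status

ae = AE(ae_title="STORE_SCP_SKETCH")          # hypothetical AE title
ae.supported_contexts = AllStoragePresentationContexts
ae.start_server(("0.0.0.0", 11112), block=True,
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```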
Here's a breakdown of the acceptance criteria and the study details for the NeuroQuant device, based on the provided FDA 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
| Model | Acceptance Criteria | Reported Device Performance | Metric |
|---|---|---|---|
| Brain Segmentation Model | Performance against predicate device (meets accuracy and reproducibility criteria) | Meets acceptance criteria for accuracy and reproducibility (details not explicitly stated beyond "meets acceptance criteria") | Dice Similarity Coefficient (DSC) |
| FLAIR Lesion Segmentation Model | Mean DSC ≥ 0.50 and standard deviation ≤ 0.18 | Mean DSC of 0.70 with a standard deviation of 0.14 | Dice Similarity Coefficient (DSC) |
| MCH Detection Model | Median F1 Score ≥ 0.51 | Median F1 Score of 0.60 | F1 Score |
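For reference, the two metrics in the table can be computed from binary segmentation masks and per-case detection counts, as in the sketch below. The toy data is invented for illustration and does not reflect the actual test sets or the code used in the submission.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 score from true/false positive and false negative detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# Toy 3D masks standing in for a predicted and a reference lesion segmentation
rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64)) > 0.7
pred = truth.copy()
pred[:8] = ~pred[:8]                          # perturb a slab to simulate errors
print(f"DSC = {dice_coefficient(pred, truth):.2f}")
print(f"F1  = {f1_score(tp=12, fp=4, fn=6):.2f}")
```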
2. Sample Sizes Used for the Test Set and Data Provenance
- Brain Segmentation Model:
- Test Set Size: 30 patients
- Data Provenance: Curated to represent a diverse patient population across the United States. The study type (retrospective/prospective) and the specific sites of origin are not stated, but the description implies retrospective data collected from diverse institutions across the US.
- FLAIR Lesion Segmentation Model:
- Test Set Size: 63 patients
- Data Provenance: Curated to represent a diverse patient population across the United States. The study type (retrospective/prospective) is not stated, but the description implies retrospective data collected from diverse US institutions (data acquired across Philips, GE, and Siemens scanners).
- MCH Detection Model:
- Test Set Size: 117 patients
- Data Provenance: Curated to represent a diverse patient population across the United States. The study type (retrospective/prospective) is not stated, but the description implies retrospective data collected from diverse US institutions (data acquired across Philips, GE, and Siemens scanners).
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts
The document does not specify the number of experts used or their qualifications (e.g., radiologist with 10 years of experience) for establishing the test-set ground truth. It broadly states that the software was validated against "known ground truth values" and a "gold standard - computer-aided expert manual segmentation," but provides no specifics on the human experts involved in generating this ground truth.
4. Adjudication Method for the Test Set
The document does not specify any adjudication method (e.g., 2+1, 3+1) for the ground truth of the test sets. It only refers to a "gold standard - computer-aided expert manual segmentation."
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study, nor does it quantify how much human readers improve with AI vs. without AI assistance. The study focuses on the standalone performance of the algorithms.
6. If a Standalone (Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done
Yes, standalone performance testing was performed. The reported metrics (Dice Similarity Coefficient, F1 Score) compare the algorithm's output directly to a reference ground truth, indicating a standalone analysis. The document states that the results "must be reviewed by a trained physician," implying the device is an assistive tool, but the evaluation itself focuses on the automated output.
7. The Type of Ground Truth Used
The ground truth for the test sets was established using "known ground truth values" and the "gold standard - computer-aided expert manual segmentation." This implies that human experts, potentially assisted by software tools, manually segmented or labeled the structures to create the reference standard for evaluation.
8. The Sample Size for the Training Set
- Brain Segmentation Model: Trained on 1,473 3D T1-weighted MRI series.
- FLAIR Lesion Segmentation Model: Developed using a training set of 340 T1 and FLAIR MRI series.
- MCH Detection Model: Trained on 463 2D T2*GRE/SWI MRI series.
9. How the Ground Truth for the Training Set Was Established
The document does not explicitly detail how the ground truth for the training sets was established. It describes the data sources (diverse MRI series from various institutions) and mentions the use of "probabilistic methods and neural network models" for labeling in the device's processing, which implies that these models learn from some form of labeled or pre-segmented data. Given the "computer-aided expert manual segmentation" mentioned for ground truth in performance testing, it's highly probable that similar methods were used for generating labels for the training data, but this is not explicitly stated.
(157 days)
NeuroQuant
NeuroQuant is intended for automatic labeling, visualization and volumetric quantification of segmentable brain structures and lesions from a set of MR images. Volumetric measurements may be compared to reference percentile data.
NeuroQuant is a fully automated MR imaging post-processing medical device software that provides automatic labeling, visualization and volumetric quantification of brain structures and lesions from a set of MR images and returns segmented images and morphometric reports. The resulting output is provided in a standard DICOM format as additional MR series with segmented color overlays and morphometric reports that can be displayed on third-party DICOM workstations and Picture Archive and Communications Systems (PACS). The high throughput capability makes the software suitable for use in both clinical trial research and routine patient care as a support tool for clinicians in assessment of structural MRIs.
NeuroQuant provides morphometric measurements based on 3D T1 MRI series. The output of the software includes volumes that have been annotated with color overlays, with each color representing a particular segmented region, and morphometric reports that provide comparison of measured volumes to age and gender-matched reference percentile data. In addition, the adjunctive use of the T2 FLAIR MR series allows for improved identification of some brain abnormalities such as lesions, which are often associated with T2 FLAIR hyperintensities.
The NeuroQuant processing architecture includes a proprietary automated internal pipeline that performs artifact correction, atlas-based segmentation, volume calculation and report generation.
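Of the pipeline stages listed, volume calculation is conceptually the simplest: a voxel count for each labeled structure scaled by the voxel size. A minimal sketch with an invented label ID and voxel spacing is shown below; it is not the device's code.

```python
import numpy as np

def structure_volume_cc(labels: np.ndarray, label_id: int,
                        voxel_size_mm: tuple[float, float, float]) -> float:
    """Volume of one labeled structure in cubic centimeters, given a label map
    and the voxel spacing in millimeters."""
    voxel_volume_mm3 = voxel_size_mm[0] * voxel_size_mm[1] * voxel_size_mm[2]
    return float((labels == label_id).sum()) * voxel_volume_mm3 / 1000.0

# Toy label map: a 20x20x20 block of label 17 inside a 128^3 volume
labels = np.zeros((128, 128, 128), dtype=np.int32)
labels[40:60, 40:60, 40:60] = 17
print(f"{structure_volume_cc(labels, 17, (1.0, 1.0, 1.0)):.2f} cc")   # 8.00 cc
```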
Additionally, automated safety measures include automated quality control functions, such as tissue contrast check, atlas alignment check and scan protocol verification, which validate that the imaging protocols adhere to system requirements.
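As an illustration of scan-protocol verification, one of the quality-control checks listed above, the sketch below inspects a few DICOM header attributes with pydicom. The chosen attributes and thresholds are assumptions for the sake of the example, not the device's actual criteria.

```python
import pydicom

def verify_protocol(path: str) -> list[str]:
    """Return a list of protocol violations found in a DICOM file header.
    The thresholds below are illustrative assumptions only."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    problems = []
    if ds.get("Modality") != "MR":
        problems.append("Modality is not MR")
    if float(ds.get("SliceThickness", 0)) > 1.5:              # assumed max, in mm
        problems.append("Slice thickness exceeds 1.5 mm")
    if float(ds.get("MagneticFieldStrength", 0)) not in (1.5, 3.0):
        problems.append("Unsupported magnetic field strength")
    return problems

# Usage (hypothetical file path):
# print(verify_protocol("series/IM0001.dcm") or "Protocol checks passed")
```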
From a workflow perspective, NeuroQuant is packaged as a computing appliance that is capable of supporting DICOM file transfer for input and output of results.
The provided text describes the 510(k) summary for the NeuroQuant device (K170981). Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implied by the performance statistics reported. While explicit acceptance thresholds are not given in a "PASS/FAIL" format, the document presents quantitative results from the performance testing.
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Segmentation Accuracy (Dice's Coefficient): | |
| - Major Subcortical Structures (compared to expert manual segmentation) | In the range of 80-90% |
| - Major Cortical Regions (compared to expert manual segmentation) | In the range of 75-85% |
| - Brain Lesions (T1 and T2 FLAIR, compared to expert manual segmentation) | Exceeds 80% |
| Segmentation Reproducibility (Percentage Absolute Volume Differences): | |
| - Major Subcortical Structures (repeated T1 MRI scans) | Mean percentage absolute volume differences in the range of 1-5% |
| - Brain Lesions (repeated T1 and T2 FLAIR MRI scans) | Mean absolute lesion volume difference less than 0.25 cc; mean percentage absolute lesion volume difference less than 2.5% |
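The reproducibility figures above are absolute and percentage volume differences between repeated scans of the same subjects. A minimal sketch of how such values could be computed is shown below, assuming the percentage is taken relative to the mean of the two measurements (the summary does not state the exact formula); the volumes are made up.

```python
def pct_abs_volume_difference(v1: float, v2: float) -> float:
    """Percentage absolute volume difference between repeated measurements,
    taken here relative to their mean (assumed convention)."""
    return 100.0 * abs(v1 - v2) / ((v1 + v2) / 2.0)

# Example with made-up lesion volumes (cc) from a scan/rescan pair
scan, rescan = 9.80, 9.62
print(f"Absolute difference:   {abs(scan - rescan):.2f} cc")
print(f"Percentage difference: {pct_abs_volume_difference(scan, rescan):.2f} %")
```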
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state the sample size used for the test set. It mentions that "3D T1 MRI scans" and "3D T1 and T2 FLAIR MRI scan pairs of subjects with brain lesions" were used for evaluation.
The document does not specify the country of origin of the data or whether it was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document states that segmentation accuracy was evaluated by "comparing segmentation accuracy with expert manual segmentations." However, it does not specify the number of experts used or their qualifications (e.g., radiologist with 10 years of experience).
4. Adjudication Method for the Test Set
The document mentions "expert manual segmentations" as the ground truth, but it does not describe any adjudication method (e.g., 2+1, 3+1, none) used to establish this ground truth among multiple experts if more than one was involved.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study or any effect size of how human readers improve with or without AI assistance. The performance testing focuses solely on the device's accuracy and reproducibility against manual segmentation and repeated scans.
6. If a Standalone (Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done
Yes, a standalone performance evaluation was done. The "Performance Testing" section describes how "NeuroQuant performance was evaluated by comparing segmentation accuracy with expert manual segmentations and by measuring segmentation reproducibility between same subject scans." This refers to the algorithm's performance directly, independent of a human reader's interaction with the output for primary diagnosis.
7. The Type of Ground Truth Used
The ground truth used for the segmentation accuracy evaluation was "expert manual segmentations." For reproducibility, the reference was repeated scans of the same subjects, with the expectation that the device produces consistent measurements across them.
8. The Sample Size for the Training Set
The document does not specify the sample size used for the training set. It describes the device's "proprietary automated internal pipeline that performs... atlas-based segmentation," and "dynamic probabilistic neuroanatomical atlas, with age and gender specificity." This implies a trained model, but the size of the dataset used for this training is not disclosed.
9. How the Ground Truth for the Training Set Was Established
The document states the device uses "atlas-based segmentation" and a "dynamic probabilistic neuroanatomical atlas, with age and gender specificity." This suggests that training relied on the creation or use of an anatomical atlas, which typically requires expert anatomical labeling and segmentation of a representative set of MR images to build probabilities for different brain regions. However, the specific methodology for establishing this ground truth for the training set (e.g., number of experts, their qualifications, adjudication) is not detailed in this summary.
(41 days)
NEUROQUANT
NeuroQuant™ is intended for automatic labeling, visualization and volumetric quantification of segmentable brain structures from a set of MR images. This software is intended to automate the current manual process of identifying, labeling and quantifying the volume of segmental brain structures identified on MR images.
NeuroQuant™ Medical Image Processing Software
The provided text is a 510(k) clearance letter from the FDA for the NeuroQuant™ Medical Image Processing Software. It does not contain specific details about the acceptance criteria or a study proving the device meets those criteria. Such information is typically found in the 510(k) summary or the full submission, which is not provided here.
Therefore, I cannot extract the requested information from the given text. The document only confirms that the device has been found substantially equivalent to a predicate device and can be marketed.