510(k) Data Aggregation (142 days)
VUNO Med-DeepBrain
The VUNO Med-DeepBrain is intended for automatic labeling and quantification of segmentable brain structures from a set of MR images. The software is intended to automate the current manual process of identifying, labeling, and quantifying segmentable brain structures on MR images. The intended users are trained healthcare professionals who work with medical imaging.
The product is used in an office-like environment.
The VUNO Med-DeepBrain provides brain structural information based on brain MR images. Input images for analysis are 3D T1-weighted brain MR images and 2D T2 FLAIR brain MR images. Once the recommended images are uploaded, automated brain segmentation is performed and volumetric data for the brain regions are provided. The results are displayed in the viewer with a color map.
VUNO Med-DeepBrain is intended for automatic labeling, visualization, and volumetric quantification of segmentable brain structures and lesions from a set of MR images. It takes a 3D T1 MR image as input and returns segmented brain structures and lesions together with volumetric quantification. A user interface is provided for visualization. The segmented structures are displayed as a color map, and the user can view a region by selecting its name. The 2D T2 FLAIR MR image is used for lesion quantification. In addition, the uploaded image can be compared to normative percentiles and to prior images when applicable. The user can download and print the result in a report format. Data can be received and sent through a Picture Archiving and Communication System (PACS) using the DICOM protocol.
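As context for the DICOM/PACS workflow described above, here is a minimal sketch of how a 3D T1 series might be read from local DICOM files with pydicom before segmentation. The directory layout, file extension, and function name are illustrative assumptions, not details from the 510(k) summary.

```python
# Hypothetical sketch: stack a directory of DICOM slices into a 3D volume.
from pathlib import Path

import numpy as np
import pydicom


def load_t1_volume(series_dir: str) -> np.ndarray:
    """Read every DICOM slice in a directory and stack into a 3D array."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    # Order slices along the scan axis using the z component of
    # ImagePositionPatient (InstanceNumber is a common fallback).
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    return np.stack([ds.pixel_array.astype(np.float32) for ds in slices])


volume = load_t1_volume("./t1_series")  # hypothetical path
print(volume.shape)                     # (slices, rows, columns)
```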
Here's a breakdown of the acceptance criteria and study details for the VUNO Med-DeepBrain based on the provided text:
Acceptance Criteria and Device Performance
Metric | Acceptance Criteria | Reported Device Performance |
---|---|---|
Average Dice Similarity Coefficient (DSC), brain regions | $\ge$ 0.80 | Exceeded criterion for whole-brain regions (cortical and subcortical) |
Average Dice Similarity Coefficient (DSC), White Matter Hyperintensities (WMH) | $\ge$ 0.80 | Exceeded criterion for WMH regions |
Average relative volume error, Hippocampus | Not explicitly stated | 0.03 |
Average relative volume error, Thalamus | Not explicitly stated | 0.01 |
Average relative volume error, Lateral ventricle | Not explicitly stated | 0.01 |
Average absolute volume error, Hippocampus | Not explicitly stated | 207 mm$^3$ |
Average absolute volume error, Thalamus | Not explicitly stated | 140 mm$^3$ |
Average absolute volume error, Lateral ventricle | Not explicitly stated | 377 mm$^3$ |
Intraclass Correlation Coefficient (ICC), brain structures | $\ge$ 0.965 | Exceeded criterion, indicating excellent reliability |
Intraclass Correlation Coefficient (ICC), WMH | $\ge$ 0.988 | Exceeded criterion, indicating excellent reliability |
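To make the table's accuracy metrics concrete, below is a minimal numpy sketch of how DSC and the relative and absolute volume errors are typically computed from binary segmentation masks. The function names, the default 1 mm isotropic voxel size, and the synthetic masks are assumptions for illustration, not details from the submission.

```python
import numpy as np


def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between binary masks: 2*|A&B| / (|A|+|B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0


def volume_errors(pred: np.ndarray, truth: np.ndarray,
                  voxel_volume_mm3: float = 1.0) -> tuple[float, float]:
    """Absolute volume error (mm^3) and relative (dimensionless) error."""
    v_pred = pred.astype(bool).sum() * voxel_volume_mm3
    v_truth = truth.astype(bool).sum() * voxel_volume_mm3
    abs_err = abs(v_pred - v_truth)
    return abs_err, abs_err / v_truth


# Tiny self-check on synthetic masks (not data from the submission).
rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64)) > 0.7
pred = truth.copy()
pred[0] = False  # perturb one slice of the prediction
print(f"DSC: {dice(pred, truth):.3f}")
print("abs/rel volume error:", volume_errors(pred, truth))
```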
Study Details
1. Sample size used for the test set and the data provenance:
- Sample Size: Not explicitly stated. The document mentions that "Whole brain regions including cortical and subcortical as well as WMH regions exceeded the criteria," but does not provide the number of scans or patients in the test set.
- Data Provenance: Not specified. The document does not mention the country of origin of the data or whether it was retrospective or prospective.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not clearly stated. The document refers to "expert" in the singular ("compared to expert"), implying a single expert or a collective expert opinion without specifying a number.
- Qualifications of Experts: Not specified. It only states "expert" without further details on their qualifications (e.g., specific medical specialty, years of experience, board certification).
3. Adjudication method for the test set:
- Not explicitly stated. The document mentions comparison to "expert" but does not describe any adjudication process (such as 2+1, 3+1, or none) involving multiple experts. The phrasing "compared to expert" suggests comparison against a single reference standard, which would remove the need for adjudication.
4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned or described. The study focused on standalone device performance against an expert.
5. (If MRMC was done) Effect size of how much human readers improve with AI vs without AI assistance:
- Not applicable, as an MRMC study comparing human readers with and without AI assistance was not reported.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Yes, a standalone performance test was done. The "Segmentation Accuracy Test and Reproducibility Test" evaluated the device's output directly against an expert's segmentation ("compared to expert"). This represents the algorithm's performance without a human in the interpretation loop. (A sketch of one common ICC formulation used in such reproducibility analyses appears after this list.)
7. The type of ground truth used:
- Expert Consensus / Manual Segmentation: The ground truth for segmentation accuracy was established by an "expert" (or experts) through manual segmentation, as the device's output (Dice Similarity Coefficient and volume errors) was compared against this expert's work. The document mentions "volume errors between manual segmentation and device output are analyzed," explicitly indicating manual segmentation as the reference.
8. The sample size for the training set:
- Not explicitly stated. The document mentions the device learns from "a large dataset" but does not provide the specific sample size for the training set.
9. How the ground truth for the training set was established:
- Not explicitly stated. The document states that the device uses a machine learning technique "in which the device learns the characteristics of brain MR images from a large dataset" and that "The subject device is based on the region parcellation principle of FreeSurfer, which is a silver standard." This implies that the training labels were likely derived from expert-curated or FreeSurfer-generated segmentations, but the specific method of establishing the training ground truth (e.g., manual annotation by experts, or automated FreeSurfer output subsequently reviewed) is not detailed.
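For the reproducibility metric cited in the table and in item 6, below is a sketch of one common ICC formulation, ICC(3,1) (Shrout and Fleiss: two-way mixed, single measures, consistency). The submission does not state which ICC form was used, so this choice, like the synthetic volumes in the example, is an assumption.

```python
import numpy as np


def icc_3_1(Y: np.ndarray) -> float:
    """ICC(3,1) for Y of shape (n subjects, k repeated measurements),
    e.g. regional volumes from a test scan and a retest scan."""
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()  # between sessions
    ss_total = ((Y - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols                # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)


# Two nearly identical sessions should give an ICC close to 1
# (hypothetical hippocampal volumes in mm^3, not data from the submission).
rng = np.random.default_rng(1)
base = rng.normal(5000.0, 800.0, size=50)
Y = np.column_stack([base, base + rng.normal(0.0, 20.0, size=50)])
print(round(icc_3_1(Y), 3))
```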