510(k) Data Aggregation (263 days)
ClearPoint Maestro™ Brain Model is intended for automatic labeling, visualization, and volumetric and shape quantification of segmentable brain structures from a set of MR images. This software is intended to automate the process of identifying, labeling, and quantifying the volume and shape of brain structures visible in MR images.
The ClearPoint Maestro™ Brain Model provides automated image processing of brain structures from T1-weighted MR images. Specifically, the device automates the manual process of identifying, labeling, and quantifying the volume and shape of subcortical structures to simplify the workflow for MRI segmentation.
The ClearPoint Maestro™ Brain Model consists of the following key functional modules:
- DICOM Read Module
- Segmentation Module
- Visualization Module
- Exporting Module
The segmented brain structures are color coded and overlaid onto the MR images, or can be displayed as a 3-D triangular mesh representation. The viewing capabilities of the device also provide anatomic orientation labels (left, right, inferior, superior, anterior, posterior), image slice selection, standard image manipulation such as contrast adjustment, rotation, and zoom, and the ability to adjust the transparency of the image overlay.
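For illustration only, the sketch below shows one common way to alpha-blend a color-coded label map onto a grayscale MR slice with an adjustable transparency, using numpy and matplotlib on synthetic arrays. This is not the device's implementation; the function name and data are assumptions made for the example.

```python
# Minimal sketch (not the vendor's code): alpha-blend a color-coded label map
# onto a grayscale MR slice, with adjustable transparency.
import numpy as np
import matplotlib.pyplot as plt

def overlay_labels(mr_slice, label_slice, alpha=0.4):
    """Display a 2-D MR slice with a color-coded segmentation overlay."""
    plt.imshow(mr_slice, cmap="gray")
    # Mask out background (label 0) so only segmented structures are tinted.
    masked = np.ma.masked_where(label_slice == 0, label_slice)
    plt.imshow(masked, cmap="tab20", alpha=alpha, interpolation="nearest")
    plt.axis("off")
    plt.show()

# Synthetic example: a 128x128 slice with two placeholder "structures".
mr = np.random.rand(128, 128)
labels = np.zeros((128, 128), dtype=int)
labels[30:60, 30:60] = 1    # placeholder structure 1
labels[70:100, 70:100] = 2  # placeholder structure 2
overlay_labels(mr, labels, alpha=0.5)
```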
The output from ClearPoint Maestro™ Brain Model can also be exported as a report in PDF format. The report also provides a comparison of segmented volumes to normative values of brain structures based on reference data.
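As a rough illustration of the normative-comparison concept (not the report's actual format, and with made-up placeholder reference bounds rather than values from the 510(k) summary), a comparison of one structure's volume against a reference range might look like this:

```python
# Illustrative sketch only: flag whether a segmented volume falls within a
# (placeholder) normative reference range, as a report row might summarize.
def compare_to_normative(name, volume_ml, norm_low_ml, norm_high_ml):
    if volume_ml < norm_low_ml:
        status = "below normative range"
    elif volume_ml > norm_high_ml:
        status = "above normative range"
    else:
        status = "within normative range"
    return f"{name}: {volume_ml:.2f} mL ({status}, reference {norm_low_ml}-{norm_high_ml} mL)"

# Placeholder numbers for demonstration only.
print(compare_to_normative("Left hippocampus", 3.1, 2.5, 4.5))
```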
Here's a breakdown of the acceptance criteria and study details for the ClearPoint Maestro™ Brain Model, based on the provided FDA 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Segmentation Accuracy: | |
| Dice coefficient >0.7 | Met: Mean Dice coefficients for 21 segmented brain structures in 101 subjects were significantly greater than 70%. The only exception was the third ventricle, attributed to manual labeling variability rather than device performance. |
| Relative volume difference <0.3 | Met: Mean relative volume differences for 21 segmented brain structures in 101 subjects were significantly less than 0.3. Exceptions for left and right lateral ventricles in the 18-25yo subgroup were attributed to manual labeling variability. |
| Reproducibility: | |
| Absolute volume differences <15% (repeated scans) | Met: Absolute volume differences using ClearPoint Maestro Brain Model 1.0 on repeated scans (20-Repeat dataset) were less than 10% for all segmented brain regions, including the third ventricle. |
The machine-learning-derived outputs (Cerebellum GM, Cerebellum WM, L Cortical GM, R Cortical GM, L Cortical WM, R Cortical WM) also met the acceptance criteria, with Dice coefficients > 0.7 and mean relative volume differences < 0.3.
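For reference, the sketch below shows a generic way these metrics are typically computed on binary masks: the Dice coefficient, the relative volume difference against a reference mask, and the percent volume difference between repeated scans. The helper names and toy arrays are assumptions for illustration and do not reproduce the validation protocol.

```python
# Generic metric sketch (not the 510(k) validation code): Dice coefficient,
# relative volume difference, and absolute volume difference between repeats.
import numpy as np

def dice_coefficient(pred, ref):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

def relative_volume_difference(pred, ref):
    """|V_pred - V_ref| / V_ref, with volumes taken as voxel counts."""
    v_pred, v_ref = pred.astype(bool).sum(), ref.astype(bool).sum()
    return abs(v_pred - v_ref) / v_ref

def absolute_volume_difference_pct(vol_scan1, vol_scan2):
    """Percent volume difference between two repeated scans of one subject."""
    return 100.0 * abs(vol_scan1 - vol_scan2) / ((vol_scan1 + vol_scan2) / 2.0)

# Toy example with synthetic masks.
ref = np.zeros((64, 64, 64), dtype=bool)
ref[20:40, 20:40, 20:40] = True
pred = np.zeros_like(ref)
pred[21:41, 20:40, 20:40] = True

print("Dice:", dice_coefficient(pred, ref))                         # criterion: > 0.7
print("Rel. volume diff:", relative_volume_difference(pred, ref))   # criterion: < 0.3
print("Abs. volume diff (%):", absolute_volume_difference_pct(8000, 7600))  # criterion: < 15%
```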
2. Sample Sizes Used for the Test Set and Data Provenance
- Test Set Sample Size: 101 subjects for segmentation accuracy and relative volume difference validation.
- Reproducibility Test Set Sample Size: 20 repeated scans (the "20-Repeat" dataset). This implies 20 subjects with repeated scans, but the summary does not explicitly state whether these are 20 distinct subjects or 20 scans.
- Data Provenance: Not specified (e.g., country of origin, retrospective or prospective). The summary only states that the validation dataset was "completely independent from the training data created by Philips."
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Not explicitly stated for the validation/test set. The document mentions "manually labeled data" was used as ground truth for the validation set, but does not specify how many experts created this manual labeling or their qualifications.
- Qualifications of Experts: Not specified for the test set ground truth.
4. Adjudication Method for the Test Set
The adjudication method for establishing ground truth for the test set is not explicitly mentioned. It states "manually labeled data" was used, but details on how consensus was reached (e.g., 2+1, 3+1, none) are absent.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
No, an MRMC comparative effectiveness study was not explicitly stated as being performed. The study focuses on the standalone performance of the device against manually labeled data, not on its impact on human reader performance with or without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done
Yes, a standalone performance study was done. The study assesses the device's ability to segment, quantify, and reproduce results compared to manually labeled data, without human interaction with the device's output during the evaluation.
7. The Type of Ground Truth Used
The primary type of ground truth used for the validation (test set) was expert consensus / manual labeling ("manually labeled T1-weighted MRI data").
8. The Sample Size for the Training Set
The sample size for the training set is not explicitly stated. It mentions: "The training data was created by the three technical experts at Philips Research Hamburg." This implies a dataset was used for training, but its size is not provided.
9. How the Ground Truth for the Training Set was Established
The ground truth for the training set was established by three technical experts at Philips Research Hamburg. The method (e.g., manual segmentation, specific tools, adjudication) is not detailed beyond this.