Search Results
Found 3 results
510(k) Data Aggregation
(88 days)
Neurophet AQUA is intended for automatic labeling, visualization and volumetric quantification of segmentable brain structures and lesions from a set of MR images. Volumetric data may be compared to reference percentile data.
Neurophet AQUA is a fully automated MR imaging post-processing medical device software that provides automatic labeling, visualization, and volumetric quantification of brain structures from a set of MR images and returns segmented images and morphometric reports. The resulting output is provided in morphometric reports that can be displayed on Picture Archive and Communications Systems (PACS). The high throughput capability makes the software suitable for use in routine patient care as a support tool for clinicians in assessment of structural MRIs.
Neurophet AQUA provides morphometric measurements based on T1 MRI series. The output of the software includes volumes that have been annotated with color overlays, with each color representing a particular segmented region, and morphometric reports that provide comparison of measured volumes to age and gender-matched reference percentile data. In addition, the adjunctive use of the T2 FLAIR MR series allows for improved identification of some brain abnormalities such as lesions, which are often associated with T2 FLAIR hyperintensities.
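The document does not describe how the percentile comparison is implemented internally; the following is a minimal sketch, assuming a normative table of structure volumes from age- and sex-matched healthy controls. The function name and normative values are hypothetical.

```python
import numpy as np
from scipy.stats import percentileofscore

def volume_percentile(measured_cc, normative_cc):
    """Percentile of a measured structure volume within an age- and
    sex-matched normative sample (hypothetical illustration)."""
    return percentileofscore(normative_cc, measured_cc)

# Hypothetical normative hippocampal volumes (cc) for one age/sex stratum.
norms = np.array([3.2, 3.5, 3.6, 3.8, 3.9, 4.0, 4.1, 4.3, 4.5, 4.7])
print(f"{volume_percentile(3.4, norms):.0f}th percentile")  # low vs. norms
```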
Neurophet AQUA processing architecture includes a proprietary automated internal pipeline that performs segmentation, volume calculation and report generation.
The results are displayed in a dedicated graphical user interface, allowing the user to:
- Browse the segmentations and the measures,
- Compare the results of segmented brain structures to a reference healthy population,
- Read and print a PDF report
Additionally, safety measures include automated quality control functions, such as scan protocol verification, which validate that the imaging protocols adhere to system requirements.
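The submission does not spell out which header fields the scan protocol verification inspects; the sketch below shows one way such a check could look using pydicom, with entirely hypothetical protocol requirements.

```python
from pydicom import dcmread

# Hypothetical protocol requirements; the checks Neurophet AQUA actually
# performs are not detailed in the summary.
REQUIRED = {"Modality": "MR", "MRAcquisitionType": "3D"}
MAX_SLICE_THICKNESS_MM = 1.5

def verify_scan_protocol(path):
    """Return a list of human-readable protocol violations for one file."""
    ds = dcmread(path, stop_before_pixels=True)
    problems = [
        f"{kw}: expected {want!r}, got {ds.get(kw)!r}"
        for kw, want in REQUIRED.items()
        if ds.get(kw) != want
    ]
    thickness = ds.get("SliceThickness")
    if thickness is None or float(thickness) > MAX_SLICE_THICKNESS_MM:
        problems.append(
            f"SliceThickness: expected <= {MAX_SLICE_THICKNESS_MM} mm, got {thickness}"
        )
    return problems
```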
The provided text does not contain an explicit table of acceptance criteria or details of a multi-reader, multi-case (MRMC) comparative effectiveness study. The performance information that can be extracted is summarized below:
1. Table of Acceptance Criteria and Reported Device Performance:
| Metric | Acceptance Criteria (Implied by comparison to predicate/reference) | Reported Device Performance (Neurophet AQUA v3.1) |
|---|---|---|
| T2 FLAIR lesion segmentation accuracy | Dice's coefficient exceeds 0.80 | Dice's coefficient exceeds 0.80 |
| T2 FLAIR lesion segmentation reproducibility | Mean absolute lesion volume difference less than 0.25 cc | Mean absolute lesion volume difference less than 0.25 cc |
| All other performance metrics (T1 image analysis) | Same as previous Neurophet AQUA v2.1 (K220437) | Same as previous Neurophet AQUA v2.1 (K220437) |
(Note: The acceptance criteria for the T2 FLAIR analysis features are implied to be met because the text states, "The test results meet acceptance criteria based on the performance of the reference device, NeuroQuant v2.2 (K170981)." The exact numerical acceptance criteria for the reference device are not explicitly provided, so the reported device performance is used here as the implied acceptance criterion for the new T2 FLAIR features.)
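For reference, both reported metrics are conventionally computed from binary segmentation masks as sketched below; this is a generic NumPy illustration, not the vendor's implementation, and the masks are hypothetical.

```python
import numpy as np

def dice(a, b):
    """Dice's coefficient between two boolean masks: 2|A∩B| / (|A| + |B|)."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def volume_cc(mask, voxel_volume_mm3):
    """Segmented volume in cc (1 cc = 1000 mm^3)."""
    return mask.sum() * voxel_volume_mm3 / 1000.0

# Hypothetical lesion masks from a scan/rescan pair on a 1 mm isotropic grid.
rng = np.random.default_rng(0)
scan1 = rng.random((64, 64, 64)) > 0.99
scan2 = scan1.copy()
print(dice(scan1, scan2))                                  # 1.0 (identical)
print(abs(volume_cc(scan1, 1.0) - volume_cc(scan2, 1.0)))  # 0.0 cc
```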
2. Sample sizes used for the test set and data provenance:
- Accuracy test dataset: 136 images
- Reproducibility test dataset: 52 images
- Data Provenance: Multi-site data collection, primarily sourced from U.S. hospitals.
3. Number of experts used to establish the ground truth for the test set and their qualifications:
- Number of experts: Three
- Qualifications of experts: U.S.-based neuroradiologists. (Specific years of experience are not mentioned).
4. Adjudication method for the test set:
- Adjudication method: Ground truth was established by "consensus among three U.S.-based neuroradiologists." This indicates consensus-based adjudication, but the specific mechanics (e.g., majority vote, discussion until agreement, 2+1, or 3+1) are not detailed.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- The provided text does not mention a multi-reader multi-case (MRMC) comparative effectiveness study evaluating how much human readers improve with AI vs without AI assistance. The performance data focuses on the algorithm's accuracy and reproducibility against expert manual segmentation.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Yes, performance was evaluated in a standalone (algorithm only) manner. The text describes comparing the device's segmentation accuracy and reproducibility directly with expert manual segmentations.
7. The type of ground truth used:
- Type of ground truth: Expert manual segmentations. The text states: "Neurophet AQUA performance was then evaluated by comparing segmentation accuracy with expert manual segmentations..."
8. The sample size for the training set:
- The text does not specify the sample size for the training set. It only mentions that the subjects on whom the device was "trained and tested" included healthy subjects, mild cognitive impairment patients, Alzheimer's disease patients, and multiple sclerosis patients, ranging from young adults to the elderly.
9. How the ground truth for the training set was established:
- The text does not explicitly state how the ground truth for the training set was established. It only discusses the ground truth establishment for the test set.
(448 days)
Neurophet AQUA is intended for automatic labeling, visualization and volumetric quantification of segmentable brain structures from a set of MR images. Volumetric data may be compared to reference percentile data.
Neurophet AQUA is a fully automated MR imaging post-processing medical device software that provides automatic labeling, visualization, and volumetric quantification of brain structures from a set of MR images and returns segmented images and morphometric reports. The resulting output is provided in morphometric reports that can be displayed on Picture Archive and Communications Systems (PACS). The high throughput capability makes the software suitable for use in both clinical trial research and routine patient care as a support tool for clinicians in assessment of structural MRIs.
Neurophet AQUA provides morphometric measurements based on T1 MRI series. The output of the software includes volumes that have been annotated with color overlays, with each color representing a particular segmented region, and morphometric reports that provide comparison of measured volumes to age and gender-matched reference percentile data.
Neurophet AQUA processing architecture includes a proprietary automated internal pipeline that performs segmentation, volume calculation and report generation. The results are displayed in a dedicated graphical user interface, allowing the user to:
- Browse the segmentations and the measures,
- Compare the results of segmented brain structures to a reference healthy population,
- Read and print a PDF report
Additionally, safety measures include automated quality control functions, such as tissue contrast check and scan protocol verification, which validate that the imaging protocols adhere to system requirements.
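The summary names a tissue contrast check among the QC functions but gives no algorithm; a crude version might compare mean white matter and gray matter intensities on the T1 image, as in this hypothetical sketch (the 1.10 threshold is invented for illustration).

```python
import numpy as np

def tissue_contrast_ok(t1, gm_mask, wm_mask, min_ratio=1.10):
    """On T1-weighted MRI, white matter should be noticeably brighter than
    gray matter; flag scans where the WM/GM intensity ratio is too low.
    The threshold is hypothetical, not from the submission."""
    ratio = float(t1[wm_mask].mean() / t1[gm_mask].mean())
    return ratio >= min_ratio, ratio
```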
Here's a breakdown of the acceptance criteria and the study details for Neurophet AQUA, based on the provided text:
1. Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance (Neurophet AQUA) |
|---|---|
| Segmentation accuracy (Dice's coefficient for major subcortical brain structures) | In the range of 80-90% |
| Segmentation accuracy (Dice's coefficient for major cortical regions) | In the range of 75-85% |
| Reproducibility (mean percentage absolute volume differences for all major subcortical structures) | In the range of 1-5% |
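As an illustration, the reproducibility metric can be computed per structure from a scan/rescan pair as below; the submission does not state the exact denominator, so this sketch assumes the mean of the two measurements (a common convention). The volumes are hypothetical.

```python
import numpy as np

def mean_pct_abs_volume_diff(vols_scan1, vols_scan2):
    """Mean percentage absolute volume difference across structures,
    relative to the mean of the two measurements (one common convention;
    the submission does not specify the exact denominator)."""
    v1, v2 = np.asarray(vols_scan1, float), np.asarray(vols_scan2, float)
    pct = 100.0 * np.abs(v1 - v2) / ((v1 + v2) / 2.0)
    return pct.mean()

# Hypothetical test-retest volumes (cc) for three subcortical structures.
print(mean_pct_abs_volume_diff([4.1, 1.6, 7.9], [4.0, 1.65, 7.7]))  # ≈ 2.7
```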
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Accuracy Test Set: 64 T1 scans (36 US-based, 40 females, age range 20-90)
- Sample Size for Reproducibility Test Set: 50 repeated T1 scans (31 US-based, 23 females, age range 10-90)
- Data Provenance: Both test sets included retrospective data from various sources:
- 36 of 64 (56%) scans for accuracy were US-based data.
- 31 of 50 (62%) scans for reproducibility were US-based data.
- The test sets included cognitive normal, mild cognitive impairment, and Alzheimer's disease patients.
- Data was acquired from MR scanners of three main vendors (Siemens, Philips, and GE).
- The document explicitly states that "All the testing data was exclusive from the training dataset."
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- The text states, "Ground-truth data were initially generated using FreeSurfer (General Hospital Corporation, Boston, MA, USA, version 6.0) and verified and corrected by four radiologists."
- Qualifications of Experts: Four radiologists. Specific experience level (e.g., years of experience) is not provided in the document.
4. Adjudication Method for the Test Set
- The ground truth for the test set was initially generated by FreeSurfer and then "verified and corrected by four radiologists." This implies an expert review process in which the radiologists reviewed and refined the FreeSurfer outputs. The specific adjudication method (e.g., 2+1, 3+1) is not explicitly detailed, but it indicates multiple-expert review and correction.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- No, a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not reported in this summary. The study focused on the standalone performance of the AI device against expert manual segmentations (ground truth) and reproducibility.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, a standalone performance study was conducted. The performance metrics (Dice's coefficient and percentage absolute volume differences) directly evaluate the algorithm's output against the established ground truth without involving human-in-the-loop performance for the reported results. The device "yields reproducible results that are well correlated with expert manual segmentations," indicating an evaluation of the device's output itself.
7. The Type of Ground Truth Used
- The ground truth was semi-automated segmentation with expert correction: it was "initially generated using FreeSurfer" (a widely accepted brain segmentation tool) and then "verified and corrected by four radiologists."
8. The Sample Size for the Training Set
- 300 T1-weighted MRI scans.
9. How the Ground Truth for the Training Set Was Established
- The ground truth for the training set was established in the same manner as the ground truth for the test set: "Ground-truth data were initially generated using FreeSurfer (General Hospital Corporation, Boston, MA, USA, version 6.0) and verified and corrected by four radiologists."
- These 300 scans were collected from ten different MRI scanner types and included public datasets such as ADNI, IXI, PPMI, HCP, and AIBL.
(81 days)
Neurophet SCALE PET is a software for the registration, fusion, display and analysis of medical images from multiple modalities including MRI and PET. The software aids clinicians in the assessment and quantification of pathologies from PET Amyloid/FDG scans of the human brain. It enables automatic analysis and visualization of amyloid protein concentration through the calculation of standardized uptake value ratios (SUVR) within target regions of interest and comparison to those within the reference regions.
The software is deployed via medical imaging workplaces and is organized as a series of workflows which are specific to use with radio-tracer and disease combinations.
Neurophet SCALE PET is a standalone software product that automatically calculates standardized uptake value ratios (SUVR) and provides quantified calculation results for quantitative analysis of FDG and Amyloid PET images. The calculation results are intended to aid clinicians in diagnosing patients' pathologies. Furthermore, the function provided to visualize the results of the analysis of the image is designed to help clinicians perform accurate visual interpretation.
Functions and workflow supported by the product are as follows: the user may set a specified region as the reference region; however, in order to achieve a reliably constant count in the reference region, FDA recommends the selection of the pons or cerebellar white matter as the reference region for assessment of SUVR in FDG PET imaging.
Because this product complies with the standard DICOM medical imaging protocol, it can be linked with picture archiving and communication systems (PACS).
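The SUVR computation itself is simple once target and reference masks are available; here is a generic NumPy sketch, not the vendor's implementation (the PET volume and label map are hypothetical; in practice they come from a co-registered MRI segmentation).

```python
import numpy as np

def suvr(pet, target_mask, reference_mask):
    """SUVR: mean uptake in the target ROI divided by mean uptake in the
    reference region (e.g., pons or cerebellar white matter for FDG)."""
    return pet[target_mask].mean() / pet[reference_mask].mean()

# Hypothetical PET volume and label map from a co-registered segmentation.
rng = np.random.default_rng(0)
pet = rng.random((128, 128, 64))
labels = np.zeros(pet.shape, dtype=np.uint8)
labels[40:60, 40:60, 20:30] = 1  # target ROI (e.g., cortical composite)
labels[70:80, 70:80, 5:10] = 2   # reference region (e.g., pons)
print(suvr(pet, labels == 1, labels == 2))
```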
Here's an analysis of the acceptance criteria and study details for the Neurophet SCALE PET device, based on the provided document:
1. Acceptance Criteria and Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Segmentation accuracy | Dice Similarity Coefficient (DSC) of 86.39 ± 3.12% on major subcortical brain structures |
| SUVR calculation reliability | Intraclass Correlation Coefficient (ICC) > 0.6 for all regions of interest (ROI) |
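The document does not state which ICC form was used; the sketch below implements ICC(1,1), the one-way random-effects variant often used for test-retest or cross-method agreement. The SUVR values are hypothetical.

```python
import numpy as np

def icc_oneway(scores):
    """ICC(1,1), one-way random-effects model.
    scores: (n_subjects, k_measurements) array, e.g., SUVRs for one ROI
    computed on repeated scans or by two processing pipelines."""
    n, k = scores.shape
    grand = scores.mean()
    subj_means = scores.mean(axis=1)
    # Between- and within-subject mean squares from one-way ANOVA.
    ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((scores - subj_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical SUVRs for five subjects, each measured twice.
suvrs = np.array([[1.21, 1.25], [1.02, 1.00], [1.45, 1.41],
                  [0.98, 1.03], [1.30, 1.28]])
print(icc_oneway(suvrs))  # close to 1 for highly consistent measurements
```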
Study Details
2. Sample size used for the test set and the data provenance
The document does not explicitly state the sample size used for the test set in the performance studies. It also does not specify the provenance of the data (e.g., country of origin, retrospective or prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document states that "manual segmentation done by experts" was used for the segmentation accuracy evaluation. However, it does not specify the number of experts involved or their qualifications (e.g., "radiologist with 10 years of experience").
4. Adjudication method for the test set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1, none) for establishing the ground truth for the test set. It only mentions "manual segmentation done by experts."
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance
No, a multi-reader, multi-case (MRMC) comparative effectiveness study comparing human readers with AI assistance versus without AI assistance was not reported in this document. The studies focus on the standalone performance of the device.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
Yes, a standalone performance study was done. The Segmentation Accuracy and SUVR Calculation Reliability tests evaluate the algorithm's performance in comparison to expert manual segmentations and conventional PET processing tools, respectively. This assesses the device's capabilities without a human-in-the-loop directly interpreting the results or making a diagnosis.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth used for:
- Segmentation Accuracy: "Manual segmentation done by experts." This indicates expert consensus-based ground truth.
- SUVR Calculation Reliability: Comparison against "conventional PET processing tools." While not explicitly "ground truth" in the clinical sense, it serves as a reference standard for reliability.
8. The sample size for the training set
The document does not provide any information regarding the sample size used for the training set.
9. How the ground truth for the training set was established
The document does not provide any information on how the ground truth for the training set was established.