Search Results
Found 3 results
510(k) Data Aggregation
(202 days)
Combinostics Oy
cNeuro™ cDAT is intended for use by Nuclear Medicine or Radiology practitioners and referring physicians for display, processing, and reporting of Nuclear Medicine Imaging data.
cNeuro™ cDAT enables visual evaluation and quantification of ioflupane I 123 (DaTscan™) images. The software enables automated quantification of tracer uptake and comparison with the corresponding tracer uptake in healthy subjects as provided by normal population databases of ioflupane I 123 (DaTscan™) images. cNeuro™ cDAT assists in detection of loss of functional dopaminergic neuron terminals in the striatum, which is correlated with Parkinson disease or Dementia with Lewy Bodies (DLB).
cNeuro™ cDAT has not been demonstrated to improve ioflupane I 123 reader performance for distinguishing positive from negative patients. This device should not be used to deviate from ioflupane I 123 dosing and administration instructions. Refer also to ioflupane I 123 prescribing information for instructions.
cNeuro™ cDAT is Software as a Medical Device (SaMD) intended to aid physicians in the evaluation of the loss of functional dopaminergic neuron terminals in the striatum through the quantification of ioflupane I 123 (DaTscan™) images. cNeuro™ cDAT is fully automated image analysis software that provides tools for viewing DaTscan™ images and for quantifying tracer uptake in the striatum with comparison to reference data from healthy controls. The results are summarized in a PDF report.
cNeuro™ cDAT quantifies DaTscan™ brain images and computes Striatal Binding Ratios (SBRs) for different volumes of interest (VOIs). SBRs are computed by subtracting the uptake in a background VOI from the tracer uptake in the target VOI and then dividing the result by the background VOI uptake. Results are compared with normative values in a reference database, and z-scores are presented.
cNeuro™ cDAT quantifies the data by registering the images to a template where VOIs are defined. Quantification results are summarized in PDF reports that are sent to the organization's PACS. cNeuro™ cDAT also offers interactive review of the DaTscan™ images and the quantification results in a browser-based viewer.
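The SBR and z-score computations described above can be sketched in a few lines. This is a minimal illustration with invented uptake values; the actual cNeuro™ cDAT pipeline, VOI definitions, and reference statistics are not disclosed in the document.

```python
def striatal_binding_ratio(target_uptake, background_uptake):
    """SBR = (target - background) / background, per the description above."""
    return (target_uptake - background_uptake) / background_uptake

def z_score(value, ref_mean, ref_std):
    """Standard z-score of a patient value against a normal-population reference."""
    return (value - ref_mean) / ref_std

# Invented counts for a striatal target VOI and a background VOI:
sbr = striatal_binding_ratio(target_uptake=3.0, background_uptake=1.0)  # → 2.0
z = z_score(sbr, ref_mean=2.8, ref_std=0.4)  # ≈ -2.0, i.e. below the reference mean
```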
Here's a breakdown of the acceptance criteria and study details based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Performance Requirement) | Reported Device Performance (cNeuro™ cDAT) |
|---|---|
| Percentage of patients with paired z-value outputs differing by >0.5 from the predicate device | Ranged from 5.4% to 7.6% for the left and right putamen, caudate, anterior putamen, and posterior putamen. This implies 92.4% to 94.6% of patients had z-value outputs within ±0.5 of the predicate. |
| Compliance with the DICOM standard (NEMA PS 3.1 - 3.20 (2021)) | Complies with the NEMA PS 3.1 - 3.20 (2021) Digital Imaging and Communications in Medicine (DICOM) Set (Radiology) standard. |
| Adequate quality of displayed images and other software functions so as not to present a hazardous situation | Failure or latent flaw of software functions would not present a hazardous situation with a probable risk of death or serious injury. (This is a safety assessment rather than a specific performance metric, but it functions as an acceptance criterion for safety.) |
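The agreement metric in the table (the fraction of paired z-values differing by more than 0.5 between the subject and predicate devices) reduces to a simple count. The paired values below are invented for illustration:

```python
def fraction_discordant(z_subject, z_predicate, tolerance=0.5):
    """Fraction of paired z-values whose absolute difference exceeds `tolerance`."""
    pairs = list(zip(z_subject, z_predicate))
    discordant = sum(1 for a, b in pairs if abs(a - b) > tolerance)
    return discordant / len(pairs)

# Invented paired outputs for one VOI across four patients:
subject = [-1.2, 0.3, -2.8, 1.1]
predicate = [-1.0, 0.2, -2.1, 1.0]
rate = fraction_discordant(subject, predicate)  # 1 of 4 pairs differs by > 0.5 → 0.25
```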
2. Sample Size for Test Set and Data Provenance
- Sample Size for Test Set: Imaging from 370 patients was initially available. A subset of 48 images could not be processed with the predicate device and was excluded, so the effective test set used for comparison comprised 322 patients (370 - 48).
- Data Provenance: The imaging data was obtained from third-party clinical investigations (NCT01952678, NCT01141023). The text does not explicitly state the country of origin; the NCT identifiers indicate registration on ClinicalTrials.gov, a US registry, but the study sites could be US-based or international. The data is retrospective, as it was already available from these completed clinical investigations.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of those Experts
The document does not provide information on the number or qualifications of experts used to establish ground truth for the test set. The performance testing was a comparison against a predicate device's output (z-values), not against a separate expert-defined ground truth for the test set.
4. Adjudication Method for the Test Set
The document does not specify an adjudication method for the test set in the context of expert review. As noted above, the testing involved a direct comparison of z-value outputs between the subject device and the predicate device.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not conducted to assess human reader improvement with AI assistance. The Indications for Use section explicitly states: "cNeuro™ cDAT has not been demonstrated to improve ioflupane I 123 reader performance for distinguishing positive from negative patients."
6. Standalone (Algorithm Only) Performance Study
Yes, a standalone performance study was done. The performance testing described involved the cNeuro™ cDAT software (the algorithm) independently processing patient images and generating z-value outputs, which were then compared to the predicate device's z-value outputs. There is no mention of a human-in-the-loop component in this specific performance comparison.
7. Type of Ground Truth Used
The "ground truth" for the performance comparison was the output values (z-scores) generated by the predicate device (DaTQUANT Application / GE Medical Systems, LLC.) for the same patient images. This is a comparison against an established, legally marketed device's quantitative outputs, rather than a clinical ground truth (like pathology or outcomes data) directly determining disease status. The device "assists in detection of loss of functional dopaminergic neuron terminals," but the performance study focused on agreement with the predicate's quantification.
8. Sample Size for the Training Set
The document does not explicitly state the sample size for the training set used to develop the cNeuro™ cDAT algorithm. It mentions that "Data from DaTscan™ studies of healthy controls is used to define a reference normal database," which is part of the device's functionality, but this is distinct from the training data for the image processing and quantification algorithm itself.
9. How Ground Truth for the Training Set Was Established
The document does not provide details on how the ground truth for the training set was established for the image processing and quantification algorithm. It does mention that the "Normal Database" used for comparison within the device is defined from "Data from DaTscan™ studies of healthy controls." This normal database would have its own "ground truth" definition (i.e., these individuals were determined to be healthy controls), but this isn't directly the ground truth for training the core quantification algorithm.
(106 days)
Combinostics Oy
cNeuro cPET aids physicians in the evaluation of patient pathologies via assessment and quantification of PET brain scans.
The software aids in the assessment of human brain PET scans, enabling automated analysis through quantification of tracer uptake and comparison with the corresponding tracer uptake in normal subjects. The resulting quantification is presented using volumes of interest and voxel-based maps of the brain. cNeuro cPET allows the user to generate information on relative changes in PET-FDG glucose metabolism.
cNeuro cPET additionally allows the user to generate information on relative changes in PET brain amyloid load between a subject's images and a normal database, which may be the result of brain neurodegeneration.
PET co-registration and fusion display capabilities with MRI allow PET findings to be related to brain anatomy.
cNeuro cPET aids physicians in the image interpretation of PET studies conducted on patients being evaluated for cognitive impairment, or other causes of cognitive decline.
cNeuro cPET has been developed to aid clinicians in the assessment and quantification of pathologies derived from PET scans. The software enables the display, co-registration, and fusion of PET images with those from MRI. Additionally, cPET enables automated quantitative and statistical analysis of tracer uptake by registration of a volume-of-interest atlas to the PET images and by comparing voxel and region-based uptake with corresponding uptake in healthy, amyloid-negative subjects. There are two quantification pipelines, one when the patient's MRI is not available (PET-only) and one when the patient's MRI is available (PET-MR). Quantification results are presented using volumes of interest, voxel-based or 3D stereotactic surface projection maps of the brain.
The provided text describes the regulatory filing for cNeuro cPET (K231576), a device intended to aid physicians in the evaluation and quantification of PET brain scans for pathologies, particularly related to FDG glucose metabolism and amyloid load.
Here's an analysis of the acceptance criteria and the study proving the device meets these criteria, based on the provided information:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly derived from the performance evaluation conducted, demonstrating the device's accuracy, robustness, and agreement with established methods and a predicate device. While explicit "acceptance criteria" values are not presented in a table form, the performance summary details the results proving the device meets its intended use and is substantially equivalent to its predicate.
| Performance Metric / Acceptance Criteria (Implied) | Reported Device Performance (cNeuro cPET) |
|---|---|
| Agreement with Predicate Device (CortexID Suite) | Correlations (R) ranging from 0.98 to 0.99 depending on cohort. |
| Agreement with FreeSurfer-based PET-MR quantification | Strong correlation (R) of 0.98 to 0.99. |
| Test-Retest Variability (TRT), Flutemetamol (Pons reference) | PET-MR: 1.2%; PET-only: 1.7% |
| Test-Retest Variability (TRT), Florbetapir (Whole cerebellum reference) | PET-MR: 1.3%; PET-only: 1.8% |
| Agreement with SoT (Histopathology), Florbetaben Categorization | PET-only: 94.4%; PET-MR: 96.3% |
| Agreement with SoT (Majority Visual Read), Florbetaben Categorization | PET-only: 94.8%; PET-MR: 92.8% |
| Agreement with SoT (Unanimous Visual Read), Florbetaben Categorization | PET-only: 98.8%; PET-MR: 98.2% |
| Correlation of Centiloids with Centiloid Project values | R² ranging from 0.85 to 0.99 depending on tracer and reference region. |
| Compliance with DICOM standard | Complies with NEMA PS 3.1 - 3.20 (2021) Digital Imaging and Communications in Medicine (DICOM) Set (Radiology). |
2. Sample Size and Data Provenance
- Test Set Sample Size: 2,275 subjects.
- Data Provenance: The text explicitly mentions "a large dataset from the Alzheimer's Disease Neuroimaging Initiative (ADNI)", indicating a combination of retrospective and potentially prospective data, likely from diverse geographical origins given ADNI's multi-national nature, although specific countries are not stated. It also refers to Florbetaben Phase III data. The type of tracers supported (Flutemetamol, Florbetaben, FDG) were used in the testing.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: For the majority visual read ground truth, 5 readers were used.
- Qualifications: The text does not explicitly state the qualifications of these readers (e.g., "radiologist with 10 years of experience"). However, given the context of PET brain scan interpretation, it can be inferred they are medical professionals specializing in nuclear medicine or radiology.
4. Adjudication Method for the Test Set
- For the visual read ground truth for Florbetaben categorization: A "majority visual read" by 5 readers was used. This implies a 3 out of 5 agreement for positive/negative classification.
- Additionally, a subset of scans where there was "unanimous categorization by the readers" (all 5 readers agreed) was analyzed separately.
- For histopathology-based ground truth, no adjudication method is relevant as it's a direct pathological diagnosis.
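The 3-of-5 majority read with a separately analyzed unanimous subset, as described above, reduces to a simple vote count. The reader labels below are invented for illustration:

```python
from collections import Counter

def consensus_read(reads):
    """Return the majority label and whether the readers were unanimous."""
    label, count = Counter(reads).most_common(1)[0]
    return label, count == len(reads)

# Five invented reader calls for one florbetaben scan:
label, unanimous = consensus_read(["positive"] * 4 + ["negative"])
# → ("positive", False): a 4-of-5 majority, excluded from the unanimous subset
```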
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- The document does not describe an MRMC comparative effectiveness study where human readers' performance with and without AI assistance is compared.
- The study focuses on the standalone performance of the cNeuro cPET software and its agreement with established methods (predicate, FreeSurfer) and ground truth (histopathology, expert consensus). While expert visual reads were used to establish ground truth, they were not used in a comparative "human-in-the-loop" study format.
6. Standalone Performance (Algorithm Only)
- Yes, a standalone performance study was clearly conducted. The reported performance metrics (correlation with predicate, test-retest variability, agreement with histopathology and expert consensus) are all measures of the algorithm's performance independent of human-in-the-loop assistance. The device output includes quantitative metrics (SUVR, Z-scores, Centiloids) and maps for physician interpretation, but the performance testing itself is on the software's ability to generate these outputs accurately.
7. Type of Ground Truth Used
The study utilized multiple types of ground truth:
- Expert Consensus: "Majority visual read" by 5 readers for Florbetaben scan categorization.
- Pathology: "Histopathology" for Florbetaben scan categorization.
- Reference Methods/Data:
- Comparison with the predicate device (CortexID Suite).
- Comparison with a FreeSurfer based PET-MR quantification method used in ADNI.
- Validation against Centiloid Project values (a standardized method for quantitative amyloid plaque estimation).
8. Sample Size for the Training Set
- The document does not specify the sample size used for the training set. The 2,275 subjects are explicitly stated as the dataset for "testing of cNeuro cPET". Without further information, it's unclear if parts of this dataset were used for initial training or if a separate, larger training set was used. It states that the "ADNI" dataset was used for "assessment of robustness," which might imply testing, but ADNI datasets are also often used for model development.
9. How Ground Truth for Training Set Was Established
- Since the training set size is not provided, the method for establishing its ground truth is also not explicitly described. However, based on the performance evaluation, it is highly probable that similar methods (expert consensus, established reference methods, or potentially a form of "weak" or "noisy" labeling for initial training) would have been employed. The document mentions the device performs "automated analysis through quantification of tracer uptake and comparison with the corresponding tracer uptake in normal subjects," which implies the use of a "normal database" that would have required some form of ground truth or normative data establishment during development.
(248 days)
Combinostics Oy
cNeuro cMRI is intended for automatic labeling, quantification of segmentable brain structures from a set of MR images. The software is intended to automate the current manual process of identifying, labeling and quantifying the segmentable brain structures identified on MR images.
The intended user profile covers medical professionals who work with medical imaging. The intended operational environment is an office-like environment with a computer.
As input, cNeuro cMRI uses T1-weighted (T1) and fluid-attenuated inversion recovery (FLAIR) DICOM MR images from a single time point. The T1 image is mandatory but the FLAIR image is optional. The user selects images through connection with a Picture Archiving and Communication System (PACS) or by selecting DICOM files from a folder. cNeuro cMRI displays the selected images together with information extracted from the DICOM headers.
Image processing starts with a pre-processing stage with bias-field correction and brain extraction before the actual segmentation and calculation of MRI biomarkers begins. When processing has completed, the user can review the images with brain segmentations displayed as an overlay. cNeuro cMRI presents computed biomarkers corresponding to volumes of structures and FLAIR white matter hyperintensities. The computed biomarkers are corrected for the subject's head size, gender, and age, and are compared to corresponding biomarkers from a healthy reference population using a statistical model.
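One common way to implement a covariate-corrected comparison like the one described above is a linear normative model fitted on healthy controls. The sketch below assumes that form with invented coefficients; the document does not specify the vendor's actual statistical model:

```python
def normative_z(volume_ml, icv_ml, age, is_male, model):
    """z-score of a head-size-normalized volume against a linear normative model."""
    normalized = volume_ml / icv_ml * 1000.0  # ml per litre of intracranial volume
    expected = (model["intercept"]
                + model["age_slope"] * age
                + model["sex_offset"] * is_male)
    return (normalized - expected) / model["residual_sd"]

# Invented hippocampus model coefficients (for illustration only):
model = {"intercept": 3.2, "age_slope": -0.01, "sex_offset": 0.1, "residual_sd": 0.3}
z = normative_z(volume_ml=3.6, icv_ml=1500.0, age=70, is_male=1, model=model)
```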
Here's a detailed breakdown of the acceptance criteria and study information for the cNeuro cMRI device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document explicitly states that "A literature review was performed to set relevant acceptance criteria for each type of experiment. All experiments passed the acceptance criteria." While the specific numerical acceptance criteria from the literature review are not detailed in the provided text, the reported device performance for key metrics is given.
| Metric | Acceptance Criteria (not explicitly stated numerically in source, but "passed") | Reported Device Performance |
|---|---|---|
| Similarity Index (Dice Index), Hippocampus | Value from literature review | 0.88 |
| Similarity Index (Dice Index), Thalamus | Value from literature review | 0.91 |
| Similarity Index (Dice Index), Whole Cortex | Value from literature review | 0.88 |
| Intraclass Correlation Coefficient (ICC), Test-Retest Reproducibility (averaged over 133 structures) | Value from literature review | 0.96 |
| Correlation Coefficient, FLAIR White Matter Hyperintensities (vs. manually labeled) | Value from literature review | 0.97 |
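The Dice similarity index reported above is defined as 2·|A ∩ B| / (|A| + |B|) for two segmentation masks. A minimal sketch on invented voxel index sets:

```python
def dice_index(mask_a, mask_b):
    """Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    a, b = set(mask_a), set(mask_b)
    return 2 * len(a & b) / (len(a) + len(b))

# Invented voxel indices for an automated vs. a manually labeled hippocampus mask:
auto_mask = {1, 2, 3, 4, 5, 6, 7, 8}
manual_mask = {2, 3, 4, 5, 6, 7, 8, 9}
dice = dice_index(auto_mask, manual_mask)  # 2*7 / (8+8) = 0.875
```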
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 1399 subjects in total.
- Data Provenance: The test data included "data from healthy subjects, and patients with neurodegenerative diseases such as Alzheimer's disease, mild cognitive impairment, fronto-temporal lobe degeneration, vascular dementia as well as Multiple Sclerosis patients." The country of origin is not specified, nor is whether the data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document mentions "manually labeled ground truth data" and "manually labelled data" for white matter hyperintensities. However, it does not specify the number of experts used or their qualifications (e.g., radiologists with X years of experience).
4. Adjudication Method for the Test Set
The document does not specify an adjudication method (e.g., 2+1, 3+1, none) for the test set. It only mentions "manually labeled ground truth data."
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
A multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned in the provided text. The study focused on validating the accuracy and reproducibility of the automated system against ground truth, not on human reader performance with or without AI assistance.
6. Standalone Performance Study
Yes, a standalone study was done. The performance metrics reported (Similarity Index, ICC, Correlation Coefficient) directly reflect the algorithm's performance in automatically segmenting and quantifying brain structures in comparison to "manually labeled ground truth data" or test-retest data, without direct human-in-the-loop interaction for the reported metrics. The "QC of segmentation results" and "reviewing biomarkers" in the workflow indicate a human review step, but the reported performance metrics are for the algorithmic output.
7. Type of Ground Truth Used
The ground truth used was expert consensus / manual labeling. Specifically, the document states:
- "In the accuracy experiments, cNeuro cMRI fully automated brain segmentation was compared to manually labeled ground truth data."
- "and the correlation coefficient between the computed FLAIR white matter hyperintensities and the manually labelled data was 0.97."
8. Sample Size for the Training Set
The document does not specify the sample size used for the training set. The 1399 subjects are mentioned in the context of "experiments," which typically implies testing or validation rather than training.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for the training set was established.