K Number: K213253
Device Name: Pixyl.Neuro
Manufacturer:
Date Cleared: 2023-06-30 (638 days)
Product Code:
Regulation Number: 892.2050
Panel: RA
Reference & Predicate Devices
Predicate For: N/A
Intended Use

Pixyl.Neuro is intended for the automatic labeling and visualization of segmentable brain structures and lesions from a set of MRI images. Volumetric measurements may be compared to reference percentile data.
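
The percentile comparison mentioned above can be pictured as a lookup against normative data. The sketch below is a minimal illustration under assumed conditions: the normative table, the structure names, and the normal-distribution model are all hypothetical and are not Pixyl's actual reference data or method.

```python
# Minimal sketch: place a measured brain-structure volume on a normative
# percentile curve. The normative values below are hypothetical, purely
# for illustration; they are not Pixyl's reference data.
from statistics import NormalDist

# Hypothetical normative hippocampal volumes (ml) by age decade:
# age_decade -> (mean_ml, std_ml). Illustrative numbers only.
NORMATIVE = {
    30: (7.4, 0.6),
    50: (7.0, 0.6),
    70: (6.4, 0.7),
}

def volume_percentile(volume_ml: float, age_decade: int) -> float:
    """Return the percentile of a measured volume under a normal model."""
    mean_ml, std_ml = NORMATIVE[age_decade]
    return 100.0 * NormalDist(mu=mean_ml, sigma=std_ml).cdf(volume_ml)

if __name__ == "__main__":
    print(f"Measured 6.1 ml at age ~70: {volume_percentile(6.1, 70):.0f}th percentile")
```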

Device Description

Pixyl.Neuro is a software application for the analysis of medical images of the brain. Specifically, the application takes as input MRI images and outputs brain region volumes and lesion volumes in a report format. The application is designed to be used by clinicians treating patients with a range of neurological disorders. The application can be used in the management of patients in a routine setting and in clinical research.
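
As a rough picture of the "labeled MRI in, volume report out" flow described above, the sketch below counts voxels per label in a segmentation mask and scales by voxel size. It assumes a NIfTI segmentation read with nibabel, a hypothetical file name, and hypothetical label IDs; it is not the device's actual pipeline.

```python
# Minimal sketch: turn a labeled segmentation volume into region volumes (ml).
# File name and label IDs are hypothetical; this is not the device's pipeline.
import numpy as np
import nibabel as nib  # common neuroimaging I/O library

LABELS = {1: "whole brain", 2: "hippocampus", 3: "FLAIR lesions"}  # illustrative

def region_volumes_ml(seg_path: str) -> dict[str, float]:
    """Compute per-label volumes in ml from a labeled segmentation image."""
    img = nib.load(seg_path)
    data = np.asarray(img.dataobj)
    # Voxel volume in mm^3 from the header zooms; 1 ml = 1000 mm^3.
    voxel_ml = float(np.prod(img.header.get_zooms()[:3])) / 1000.0
    return {name: float(np.count_nonzero(data == label)) * voxel_ml
            for label, name in LABELS.items()}

if __name__ == "__main__":
    for name, vol in region_volumes_ml("segmentation.nii.gz").items():
        print(f"{name}: {vol:.1f} ml")
```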

AI/ML Overview

Here's an analysis of the acceptance criteria and study detailed in the provided text for Pixyl.Neuro:

1. Table of Acceptance Criteria and Reported Device Performance

The text clearly states: "Relevant acceptance criteria have been set based on the results of a literature review for each type of experiment. All experiments passed the acceptance criteria." However, the specific numerical acceptance criteria are not explicitly provided in the excerpt. Instead, the reported device performance is given, which implicitly met the undisclosed acceptance criteria.

| Metric | Pipeline / Module | Reported Device Performance (Mean +/- Std Dev) | Acceptance Criteria |
|---|---|---|---|
| Accuracy (Dice coefficient) | MS (Multiple Sclerosis) | 0.80 (+/- 0.06) | Not explicitly stated in excerpt; implicitly met based on literature review |
| Accuracy (Dice coefficient) | FL (Fluid Attenuated Inversion Recovery) | 0.730 (+/- 0.10) | Not explicitly stated in excerpt; implicitly met based on literature review |
| Accuracy (Dice coefficient) | BV (Brain Volume) | 0.84 (+/- 0.02) | Not explicitly stated in excerpt; implicitly met based on literature review |
| Reproducibility (mean absolute volume difference) | MS and FL modules (total lesion load) | 0.199 ml (+/- 0.193) | Not explicitly stated in excerpt; implicitly met based on literature review |
| Reproducibility (mean absolute volume difference) | BV module (mean across 20 brain structures) | 0.966 ml (+/- 1.098) | Not explicitly stated in excerpt; implicitly met based on literature review |
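
For context on the two metrics in the table, the sketch below shows how a Dice coefficient and a mean absolute volume difference are typically computed from binary segmentation masks and paired volume measurements. This is a generic illustration of the metrics, not the study's actual evaluation code.

```python
# Generic illustration of the two reported metrics; not the study's code.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return float(2.0 * np.logical_and(pred, truth).sum() / denom) if denom else 1.0

def mean_abs_volume_diff_ml(vols_a, vols_b) -> float:
    """Mean absolute difference between paired volume measurements (ml),
    e.g. from scan-rescan pairs in a reproducibility experiment."""
    return float(np.mean(np.abs(np.asarray(vols_a) - np.asarray(vols_b))))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((64, 64, 64)) > 0.5
    b = rng.random((64, 64, 64)) > 0.5
    print(f"Dice (random masks): {dice_coefficient(a, b):.2f}")
    print(f"MAVD: {mean_abs_volume_diff_ml([10.2, 8.1], [10.0, 8.4]):.3f} ml")
```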

2. Sample Size Used for the Test Set and Data Provenance

  • Sample Size: A total of 238 subject datasets were included in the experiments.
  • Data Provenance: The text states, "The device was tested upon subjects from the following groups: healthy subjects, multiple sclerosis patients, Alzheimer's patients, microangiopathy patients and white matter hyperintensities (of presumed vascular origin) patients." The country of origin is not specified, and it does not explicitly state whether the data was retrospective or prospective.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

This information is not provided in the excerpt. The text mentions "ground truth volumes or volume changes" but does not detail how this ground truth was established, including the number or qualifications of experts.

4. Adjudication Method for the Test Set

This information is not provided in the excerpt.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study and the Effect Size of Human Reader Improvement with vs. without AI Assistance

No multi-reader multi-case (MRMC) comparative effectiveness study evaluating human reader performance with and without AI assistance is mentioned in this excerpt. The study focuses on the standalone performance of the device (accuracy and reproducibility).

6. Standalone (Algorithm-Only, Without Human-in-the-Loop) Performance Study

Yes, a standalone performance study was done. The performance data presented (Dice coefficients for accuracy, and mean absolute volume differences for reproducibility) directly assess the algorithm's performance without explicit mention of human-in-the-loop interaction for performance metrics. The safety section does note that "Results must be reviewed by a trained physician," indicating a human-in-the-loop for clinical use, but the reported performance metrics are for the algorithm's output itself.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

The text refers to "ground truth volumes or volume changes." While the method of establishing this ground truth (e.g., expert consensus, manual segmentation by experts, or another reference standard) is not explicitly stated, the comparison of segmentations to "ground truth" implies a reference standard was used for validation.

8. The Sample Size for the Training Set

This information is not provided in the excerpt. The text only mentions the sample size for the test set (238 subject datasets).

9. How the Ground Truth for the Training Set was Established

This information is not provided in the excerpt. The text does mention that "misalignments are generated on the training dataset for the algorithms to learn the variability of the alignment on the atlases," which implies a training dataset exists with some form of ground truth or simulated variability, but the method of establishing that ground truth is not detailed.
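
The quoted training detail suggests some form of alignment augmentation. As a purely illustrative sketch (not the manufacturer's method), randomly perturbing the alignment of training volumes with small rigid transforms might look like the following; the shift and rotation ranges are assumptions.

```python
# Purely illustrative sketch of misalignment augmentation: small random
# rigid perturbations applied to training volumes. Not Pixyl's actual method.
import numpy as np
from scipy.ndimage import rotate, shift

def random_misalignment(volume: np.ndarray, rng: np.random.Generator,
                        max_shift_vox: float = 3.0,
                        max_angle_deg: float = 5.0) -> np.ndarray:
    """Apply a small random translation and in-plane rotation so the model
    sees realistic atlas-alignment variability during training."""
    offsets = rng.uniform(-max_shift_vox, max_shift_vox, size=volume.ndim)
    angle = rng.uniform(-max_angle_deg, max_angle_deg)
    out = shift(volume, offsets, order=1, mode="nearest")
    return rotate(out, angle, axes=(0, 1), reshape=False, order=1, mode="nearest")

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    vol = np.zeros((32, 32, 32), dtype=np.float32)
    vol[12:20, 12:20, 12:20] = 1.0  # toy "structure"
    augmented = random_misalignment(vol, rng)
    print(augmented.shape, float(augmented.max()))
```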

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).