K Number
K181939
Device Name
icobrain
Manufacturer
icometrix NV
Date Cleared
2018-11-06

(110 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
K161148, K180326
Intended Use

icobrain is intended for automatic labeling, visualization and volumetric quantification of segmentable brain structures from a set of MR or NCCT images. This software is intended to automate the current manual process of identifying, labeling and quantifying the volume of segmentable brain structures identified on MR or NCCT images.

icobrain consists of two distinct image processing pipelines: icobrain cross and icobrain long.

icobrain cross is intended to provide volumes from MR or NCCT images acquired at a single time point. icobrain long is intended to provide changes in volumes between two MR images that were acquired on the same scanner, with the same image acquisition protocol and with the same contrast at two different time points. The results of icobrain cross cannot be compared with the results of icobrain long.

Device Description

The input images can be MR images (current icobrain software - K161148 and K180326) or CT images. During the pre-processing, the modality and/or sequence of each scan is detected and each scan is converted from DICOM format to NIFTI format. The image processing then performs the actual segmentation and calculates the measurements of the brain structures and abnormalities. Finally, the computed measurements are summarized into an electronic report and (some) segmentations are overlaid on the input images, generating output images in DICOM format.
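
The summary gives no implementation detail beyond this description. Purely as an illustration of the stated stages (modality detection, DICOM-to-NIfTI conversion, segmentation, volumetry, and report/overlay generation), the sketch below uses the open-source pydicom, dicom2nifti and nibabel packages; the function names are illustrative assumptions and the code is not icometrix's.

```python
# Hypothetical sketch of the described pipeline stages, built from open-source
# tools (pydicom, dicom2nifti, nibabel). Not icometrix code.
from pathlib import Path

import dicom2nifti
import nibabel as nib
import numpy as np
import pydicom


def detect_modality(dicom_dir: str) -> str:
    """Pre-processing: read one slice header and return its Modality tag ('MR' or 'CT')."""
    first_slice = sorted(Path(dicom_dir).glob("*.dcm"))[0]
    return pydicom.dcmread(first_slice, stop_before_pixels=True).Modality


def convert_to_nifti(dicom_dir: str, nifti_path: str) -> nib.Nifti1Image:
    """Pre-processing: convert the DICOM series to NIfTI format."""
    dicom2nifti.dicom_series_to_nifti(dicom_dir, nifti_path, reorient_nifti=True)
    return nib.load(nifti_path)


def volumetry(label_img: nib.Nifti1Image) -> dict:
    """Measurement: per-structure volumes in mL from an integer label map
    (assumes millimetre voxel spacing); the segmentation itself, done by the
    device's machine-learning models, is omitted here."""
    voxel_ml = float(np.prod(label_img.header.get_zooms()[:3])) / 1000.0
    labels = np.asarray(label_img.get_fdata(), dtype=int)
    return {int(lab): float((labels == lab).sum()) * voxel_ml
            for lab in np.unique(labels) if lab != 0}
```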

Since the processing of MR images remains unchanged compared to the currently approved icobrain software (see K161148 and K180326), the remainder of this file will focus on the design of the software that processes CT images. We refer to the overall architecture focused on (pre)processing CT images as the CT pipeline.

AI/ML Overview

Below is a summary of the acceptance criteria and the study demonstrating that the device meets them, based on the provided text:

Acceptance Criteria and Device Performance

| Acceptance Criteria | Reported Device Performance |
| --- | --- |
| Accuracy: lesions, basal cisterns, lateral ventricles, and midline shift compared to manually segmented ground truth | For all experiments, the Pearson correlation coefficient between the compared measurements was 0.95 |
| Accuracy: lateral ventricles and whole-brain volumes compared to MR images segmented by the cleared icobrain 3.0 software (taken as ground truth) | For all experiments, the Pearson correlation coefficient between the compared measurements was 0.95 |
| Reproducibility: tested on CT images produced in the same scanning session | For all experiments, the intraclass correlation coefficient was 0.94 |
| All experiments must pass the acceptance criteria set by literature review | All experiments passed the acceptance criteria |
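
For orientation only, here is a minimal sketch of how agreement metrics of the kind reported above can be computed from paired volume measurements: scipy's pearsonr for the accuracy criterion and a one-way random-effects ICC(1,1) for the reproducibility criterion. The numbers are invented, and the submission does not state which ICC variant was used, so that choice is an assumption.

```python
# Minimal sketch: Pearson correlation (accuracy) and a one-way ICC (reproducibility)
# on invented paired volume measurements. Not the submission's analysis code.
import numpy as np
from scipy.stats import pearsonr


def icc_oneway(x, y):
    """One-way random-effects ICC(1,1) for two measurements per subject."""
    data = np.column_stack([x, y]).astype(float)          # shape (n_subjects, 2)
    n, k = data.shape
    subject_means = data.mean(axis=1)
    grand_mean = data.mean()
    msb = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)       # between-subject MS
    msw = np.sum((data - subject_means[:, None]) ** 2) / (n * (k - 1))  # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)


# Invented volumes (mL): device output vs. manual ground truth (accuracy),
# and two same-session CT scans of the same subjects (reproducibility).
device = np.array([12.1, 30.4, 8.7, 45.0, 22.3])
manual = np.array([11.8, 31.0, 9.1, 44.2, 21.9])
scan_a = np.array([12.1, 30.4, 8.7, 45.0, 22.3])
scan_b = np.array([12.3, 30.1, 8.9, 44.8, 22.5])

r, _ = pearsonr(device, manual)
print(f"Pearson r = {r:.3f}, ICC(1,1) = {icc_oneway(scan_a, scan_b):.3f}")
```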

Study Details

  1. Sample sizes used for the test set and data provenance:

    • Test Set Sample Size: 544 subject datasets.
    • Data Provenance: The subjects included TBI patients and potential dementia patients. The specific country of origin is not explicitly stated, but the submission is from a Belgian company (icometrix NV). The study appears to be retrospective, using existing subject datasets.
  2. Number of experts used to establish the ground truth for the test set and their qualifications:

    • The document states that some ground truth was "manually segmented." However, it does not specify the number of experts, their qualifications (e.g., radiologist with X years of experience), or the specific process for this manual segmentation.
    • For lateral ventricles and whole brain volumes, the ground truth was "MR images segmented by the cleared icobrain 3.0 software." This implies a form of software-generated ground truth, rather than human expert ground truth for these specific metrics.
  3. Adjudication method for the test set:

    • The document does not explicitly state an adjudication method (e.g., 2+1, 3+1). It only mentions that lesions, basal cisterns, lateral ventricles, and midline shift were compared to "manually segmented ground truth." Without further detail, it's impossible to determine the adjudication method for the manual segmentation.
  4. If a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done:

    • No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned or described. The study focuses on the standalone performance of the device against established ground truth and reproducibility, not on comparing human readers with and without AI assistance.
  5. If a standalone performance (algorithm only without human-in-the-loop performance) was done:

    • Yes, a standalone performance study was done. The performance testing "demonstrate[s] the performance of the CT pipeline of icobrain 4.0" by validating its accuracy and reproducibility against ground truth. This is a direct assessment of the algorithm's performance without a human in the loop during the measurement generation.
  6. The type of ground truth used:

    • Expert Consensus/Manual Segmentation: For lesions, basal cisterns, lateral ventricles, and midline shift, the ground truth was established by "manually segmented ground truth." This implies human expert input, but the specifics of consensus are not detailed.
    • Software-generated Ground Truth (Predicate Device): For lateral ventricles and whole brain volumes, the ground truth was "MR images segmented by the cleared icobrain 3.0 software."
  7. The sample size for the training set:

    • The document does not provide the sample size for the training set. It only mentions the test set size of 544 subject datasets.
  8. How the ground truth for the training set was established:

    • The document does not specify how the ground truth for the training set was established, as the size and details of the training set are not provided. The technical characteristics state "segmentation by classical machine learning and deep learning (in our case supervised voxel classification with Convolutional Neural Networks)," which implies that a training set with established ground truth would have been necessary for supervised learning, as illustrated by the generic sketch after this list.
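
The summary names the technique ("supervised voxel classification with Convolutional Neural Networks") but gives no architecture or training details. As a generic illustration only, the sketch below shows a minimal 3D patch-based voxel classifier and one supervised training step in PyTorch; the layer sizes, patch size, class count and optimizer are assumptions and do not describe icobrain's actual models.

```python
# Illustrative only: a tiny 3D CNN that predicts a tissue/lesion class for the
# centre voxel of an image patch, trained with supervised (labelled) examples.
# Architecture and hyperparameters are assumptions, not icobrain's design.
import torch
import torch.nn as nn


class VoxelClassifier(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),            # pool the patch to one feature vector
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        # patch shape: (batch, 1, depth, height, width)
        return self.classifier(self.features(patch).flatten(1))


# One supervised training step on hypothetical labelled patches (the ground
# truth labels are exactly what a training set would have to provide).
model = VoxelClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.randn(8, 1, 25, 25, 25)         # 8 intensity patches
labels = torch.randint(0, 4, (8,))              # centre-voxel class labels
loss = nn.functional.cross_entropy(model(patches), labels)
loss.backward()
optimiser.step()
```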

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).