Found 3 results

510(k) Data Aggregation

    K Number: K223268
    Device Name: BrainInsight
    Manufacturer:
    Date Cleared: 2022-12-16 (53 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Intended Use

    BrainInsight is intended for automatic labeling, spatial measurement, and volumetric quantification of brain structures from a set of low-field MR images and returns annotated and segmented images, color overlays and reports.

    Device Description

    BrainInsight is a fully automated MR imaging post-processing medical software that provides image alignment, whole brain segmentation, ventricle segmentation, and midline shift measurements of brain structures from a set of MR images. The BrainInsight processing architecture includes a proprietary automated internal pipeline based on machine learning tools. The output annotated and segmented images are provided in standard image format using segmented color overlays and reports that can be displayed on third-party workstations and FDA-cleared Picture Archive and Communications Systems (PACS).

    The high-throughput capability makes the software suitable for use in routine patient care as a support tool for clinicians in the assessment of low-field (0.064 T) structural MRIs. BrainInsight provides overlays and reports based on 0.064 T 3D MRI series of T1 Gray/White, T2-Fast, and FLAIR images.
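
    The color-overlay output described above can be illustrated with a minimal alpha-blending sketch (hypothetical function and toy data, not Hyperfine's actual rendering pipeline):

```python
import numpy as np

def blend_overlay(slice_gray, mask, color=(255, 0, 0), alpha=0.4):
    """Blend a binary segmentation mask into a grayscale slice as an RGB overlay.

    slice_gray: 2-D uint8 array (the MR slice).
    mask:       2-D bool array, True where the structure was segmented.
    color:      RGB triple used for the overlay tint.
    alpha:      overlay opacity (0 = invisible, 1 = solid color).
    """
    # Promote the grayscale slice to RGB, then alpha-blend the color
    # into the masked pixels only.
    rgb = np.stack([slice_gray] * 3, axis=-1).astype(np.float32)
    overlay = np.array(color, dtype=np.float32)
    rgb[mask] = (1.0 - alpha) * rgb[mask] + alpha * overlay
    return rgb.astype(np.uint8)

# Toy 4x4 slice with a 2x2 "ventricle" mask.
img = np.full((4, 4), 100, dtype=np.uint8)
m = np.zeros((4, 4), dtype=bool)
m[1:3, 1:3] = True
out = blend_overlay(img, m)
```

    In a real pipeline the blended slices would be re-exported as DICOM so they display on third-party workstations and PACS, as the description states.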

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the BrainInsight™ device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria were defined based on non-inferiority testing, aiming for the model performance to be no worse than the average annotator's discrepancy.
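
    The acceptance logic can be sketched as a comparison of mean errors (hypothetical names and values; the summary states only the non-inferiority criterion and does not give a margin or the statistical test used):

```python
import statistics

def passes_noninferiority(model_errors, annotator_discrepancies, margin=0.0):
    """Check that the model's mean absolute error is no worse than the
    mean annotator discrepancy, plus an optional non-inferiority margin.

    The margin and the use of plain means are assumptions for illustration;
    a real submission would apply a formal non-inferiority test with
    confidence intervals.
    """
    model_mean = statistics.mean(model_errors)
    annotator_mean = statistics.mean(annotator_discrepancies)
    return model_mean <= annotator_mean + margin

# Hypothetical midline-shift errors in millimetres.
model = [1.1, 0.9, 1.0]        # model-vs-ground-truth errors
annotators = [1.3, 1.2, 1.1]   # annotator-vs-consensus discrepancies
ok = passes_noninferiority(model, annotators)
```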

    Midline Shift Discrepancy (Lower is Better)

    [Table truncated in the source: the Application, Modality, and Acceptance Criteria columns and their values are not recoverable.]

    2. Sample Size Used for the Test Set and Data Provenance

    * Age: 2 to 12 years (20.6%), >12 to 18 to 90 years (70.6%)
    * Gender: 33% Female / 41% Male / 25% Anonymized
    * Pathology: Stroke (Infarct), Hydrocephalus, Hemorrhage (SAH, SDH, IVH, IPH), Mass/Edema, Tumor.


    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: The document states that the datasets for training and validation were annotated by "multiple experts." It then mentions that "The entire group of training image sets was divided into segments and each segment was given to a single expert." This phrasing is somewhat ambiguous for the test set specifically. It is implied that multiple experts were involved in the ground truth establishment for the overall process, but it doesn't clearly state how many experts independently evaluated each case in the test set, nor if the "single expert per segment" approach also applied to the test set ground truth.
    • Qualifications of Experts: Not specified beyond being referred to as "experts" and "annotators."

    4. Adjudication Method for the Test Set

    The adjudication method varies by application:

    • Midline Shift: Ground truth was determined based on the average shift distance of all annotators. This implies a form of consensus or averaging method rather than a strict adjudication by a senior expert.
    • Segmentation (Lateral Ventricles, Whole Brain): Ground truth for segmentation was calculated using Simultaneous Truth and Performance Level Estimation (STAPLE). STAPLE is an algorithm that estimates a "true" segmentation from multiple segmentations, weighting them based on their estimated performance. This is an algorithmic adjudication method.
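
    For intuition, a minimal binary STAPLE can be sketched as an expectation-maximization loop (an illustrative implementation with toy data, not the code used in the submission):

```python
import numpy as np

def staple_binary(segs, n_iter=50, tol=1e-6):
    """Minimal binary STAPLE: EM estimation of a consensus segmentation.

    segs: (R, N) array of 0/1 labels from R raters over N voxels.
    Returns (w, p, q): posterior foreground probability per voxel, plus each
    rater's estimated sensitivity p and specificity q.
    """
    segs = np.asarray(segs, dtype=float)
    n_raters, n_voxels = segs.shape
    prior = segs.mean()              # fixed global foreground prior
    p = np.full(n_raters, 0.9)       # initial sensitivities
    q = np.full(n_raters, 0.9)       # initial specificities
    w = np.full(n_voxels, prior)
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is truly foreground.
        fg = prior * np.prod(np.where(segs == 1, p[:, None], 1 - p[:, None]), axis=0)
        bg = (1 - prior) * np.prod(np.where(segs == 0, q[:, None], 1 - q[:, None]), axis=0)
        w_new = fg / (fg + bg)
        # M-step: re-estimate rater performance against the soft consensus
        # (clipped to avoid degenerate 0/1 rates).
        p = np.clip((segs * w_new).sum(axis=1) / w_new.sum(), 1e-6, 1 - 1e-6)
        q = np.clip(((1 - segs) * (1 - w_new)).sum(axis=1) / (1 - w_new).sum(), 1e-6, 1 - 1e-6)
        done = np.abs(w_new - w).max() < tol
        w = w_new
        if done:
            break
    return w, p, q

# Three hypothetical raters over 8 voxels; they disagree on voxels 1 and 5.
segs = np.array([
    [0, 0, 1, 1, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1, 0, 0, 0],
])
w, p, q = staple_binary(segs)
consensus = (w > 0.5).astype(int)
```

    Because STAPLE weights each rater by its estimated reliability, a systematically careless annotator contributes less to the consensus than in a plain majority vote.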

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No, a traditional MRMC comparative effectiveness study that measures how human readers improve with AI vs. without AI assistance was not explicitly described for this submission. The study focuses on standalone performance of the AI model against expert annotations and the "mean annotator" performance.
    • Effect Size of Human Improvement (if applicable): Not applicable, as an MRMC comparative effectiveness study was not detailed.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    • Was a standalone study done? Yes, the described performance evaluation appears to be a standalone (algorithm only) study. The device's performance is compared directly against the ground truth established by annotators, and against the mean discrepancy of the annotators themselves. There is no mention of human readers using the AI output to improve their performance compared to a baseline.

    7. Type of Ground Truth Used

    The type of ground truth used varies by the measurement:

    • Midline Shift: Expert consensus, calculated as the average shift distance of all annotators.
    • Segmentation (Lateral Ventricles, Whole Brain): Algorithmic consensus, calculated using Simultaneous Truth and Performance Level Estimation (STAPLE) based on expert annotations.
    • General: It is based on expert annotations of images acquired from the Hyperfine Swoop portable MRI system.

    8. Sample Size for the Training Set

    • Sample Size for Training Set: The exact numerical sample size for the training set is not explicitly stated. The document only mentions that the data collection for the training and validation datasets was done at "multiple sites."

    9. How the Ground Truth for the Training Set Was Established

    • The data collection for the training and validation datasets was done at multiple sites.
    • The datasets were annotated by multiple experts.
    • The "entire group of training image sets was divided into segments and each segment was given to a single expert."
    • "The expert's determination became the ground truth for each image set in their segment." This implies a form of single-reader ground truth for each segmented batch, rather than multi-reader consensus for every single case within the training set.

    K Number: K220815
    Device Name: BrainInsight
    Manufacturer:
    Date Cleared: 2022-07-19 (120 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Intended Use

    BrainInsight is intended for automatic labeling, spatial measurement, and volumetric quantification of brain structures from a set of low-field MR images and returns annotated and segmented images, color overlays and reports.

    Device Description

    BrainInsight is a fully automated MR imaging post-processing medical software that provides image alignment, whole brain segmentation, ventricle segmentation, and midline shift measurements of brain structures from a set of MR images from patients ages 18 years or older. The BrainInsight processing architecture includes a proprietary automated internal pipeline based on machine learning tools. The output annotated and segmented images are provided in standard image format using segmented color overlays and reports that can be displayed on third-party workstations and FDA-cleared Picture Archive and Communications Systems (PACS).

    The modified BrainInsight described in this submission includes changes to the machine learning models to allow for the processing of AI-reconstructed low-field MR images. The modified device also includes configuration updates and refactoring changes for incremental improvement.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the BrainInsight device, based on the provided text:

    BrainInsight Acceptance Criteria and Study Details

    1. Table of Acceptance Criteria and Reported Device Performance

    For Midline Shift:

    | Application | Acceptance Criteria (Error Range) | Reported Device Performance (Mean Absolute Error) |
    |---|---|---|
    | Midline Shift | "no worse than the average annotator discrepancy" (non-inferiority) | T1: 1.03 mm; T2: 0.97 mm |

    For Lateral Ventricles and Whole Brain Segmentation (Dice Overlap):

    | Application | Acceptance Criteria (Dice Overlap) | Reported Dice Overlap [%] |
    |---|---|---|
    | T1 Left Ventricle | "no worse than the average annotator discrepancy" (non-inferiority) | 84 |
    | T1 Right Ventricle | "no worse than the average annotator discrepancy" (non-inferiority) | 82 |
    | T1 Whole Brain | "no worse than the average annotator discrepancy" (non-inferiority) | 95 |
    | T2 Left Ventricle | "no worse than the average annotator discrepancy" (non-inferiority) | 81 |
    | T2 Right Ventricle | "no worse than the average annotator discrepancy" (non-inferiority) | 79 |
    | T2 Whole Brain | "no worse than the average annotator discrepancy" (non-inferiority) | 96 |

    For Lateral Ventricles and Whole Brain Segmentation (Volume Differences):

    | Application | Acceptance Criteria (Volume Differences) | Reported Volume Differences [%] |
    |---|---|---|
    | T1 Left Ventricle | "no worse than the average annotator discrepancy" (non-inferiority) | 8 |
    | T1 Right Ventricle | "no worse than the average annotator discrepancy" (non-inferiority) | 7 |
    | T1 Whole Brain | "no worse than the average annotator discrepancy" (non-inferiority) | 3 |
    | T2 Left Ventricle | "no worse than the average annotator discrepancy" (non-inferiority) | 11 |
    | T2 Right Ventricle | "no worse than the average annotator discrepancy" (non-inferiority) | 19 |
    | T2 Whole Brain | "no worse than the average annotator discrepancy" (non-inferiority) | 5 |
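
    The Dice overlap and volume-difference metrics in these tables are standard; a minimal sketch with toy masks (not the submission's evaluation code):

```python
import numpy as np

def dice_overlap(pred, truth):
    """Dice overlap between two binary masks, as a percentage."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 100.0  # both masks empty: treat as perfect agreement
    return 100.0 * 2.0 * np.logical_and(pred, truth).sum() / denom

def volume_diff_pct(pred, truth):
    """Absolute volume difference relative to ground-truth volume, in percent."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    return 100.0 * abs(int(pred.sum()) - int(truth.sum())) / truth.sum()

# Toy 1-D "masks": the prediction misses one of five ground-truth voxels.
truth = np.array([1, 1, 1, 1, 1, 0, 0, 0], dtype=bool)
pred = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
d = dice_overlap(pred, truth)     # 100 * 2*4/(4+5) ≈ 88.9
v = volume_diff_pct(pred, truth)  # 100 * |4-5|/5 = 20.0
```

    Note that the two metrics capture different failure modes: a mask can match the target volume almost exactly (low volume difference) while overlapping it poorly (low Dice).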

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state the numerical sample size for the test set. It mentions the distribution of categories:

    • Age: Min: 19, Max: 77
    • Gender: 59% Female / 41% Male
    • Pathology: Stroke (Infarct), Hydrocephalus, Hemorrhage (SAH, SDH, IVH, IPH), Mass/Edema, Tumor, Multiple sclerosis.

    Data Provenance: The images were acquired from "multiple sites" using the "FDA cleared Hyperfine Swoop Portable MR imaging system." It is implied to be retrospective as data collection occurred before the testing. The country of origin is not specified but is likely the US given the FDA submission.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The text states that "Ground truth for midline shift was determined based on the average shift distance of all annotators" and "Ground truth for segmentation is calculated using Simultaneous Truth and Performance Level Estimation (STAPLE)." It also mentions that "The datasets were annotated by multiple experts." However, the exact number of experts used for the test set's ground truth and their specific qualifications (e.g., "radiologist with 10 years of experience") are not explicitly stated.

    4. Adjudication Method for the Test Set

    The ground truth for midline shift was determined by the average shift distance of all annotators. For segmentation, the Simultaneous Truth and Performance Level Estimation (STAPLE) method was used. This implies a form of consensus-based adjudication, but not a strict numerical rule like 2+1 or 3+1. STAPLE is a probabilistic approach to estimate a true segmentation from multiple expert segmentations while simultaneously estimating the performance level of each expert.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The document describes a standalone performance study of the algorithm against expert annotations, but does not mention a multi-reader multi-case (MRMC) comparative effectiveness study where human readers' performance with and without AI assistance is compared. Therefore, no effect size of human improvement with AI assistance is provided.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, a standalone performance study was conducted. The device's performance (Midline Shift, Dice Overlap, Volume Differences) was evaluated directly against a ground truth established by annotators, and the results were compared to the average annotator discrepancy to demonstrate non-inferiority. This is a standalone evaluation of the algorithm's performance.

    7. Type of Ground Truth Used

    The ground truth used was expert consensus.

    • For midline shift, it was based on the "average shift distance of all annotators."
    • For segmentation, it was calculated using "Simultaneous Truth and Performance Level Estimation (STAPLE)" from multiple expert annotations.

    8. Sample Size for the Training Set

    The document does not explicitly state the numerical sample size for the training set. It only mentions that "Each model was trained using a training dataset to optimize parameters" and "The data collection for the training and validation datasets were done at multiple sites."

    9. How the Ground Truth for the Training Set Was Established

    The ground truth for the training set was established through expert annotation. The text states: "The datasets were annotated by multiple experts. The entire group of training image sets was divided into segments and each segment was given to a single expert. The expert's determination became the ground truth for each image set in their segment."


    K Number: K202414
    Device Name: BrainInsight
    Date Cleared: 2021-01-07 (136 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Intended Use

    BrainInsight is intended for automatic labeling, spatial measurement, and volumetric quantification of brain structures from a set of low-field MR images and returns annotated images, color overlays, and reports.

    Device Description

    BrainInsight is a fully automated MR imaging post-processing medical software that provides image alignment, whole brain segmentation, ventricle segmentation, and midline shift measurements of brain structures from a set of MR images from patients aged 18 or older. The output annotated and segmented images are provided in a standard image format using segmented color overlays and reports that can be displayed on third-party workstations and FDA-cleared Picture Archive and Communications Systems (PACS).

    The high-throughput capability makes the software suitable for use in routine patient care as a support tool for clinicians in the assessment of low-field (64 mT) structural MRIs. BrainInsight provides overlays and reports based on 64 mT 3D MRI series of a T1- and T2-weighted sequence.

    The outputs of the software are DICOM images which include volumes that have been annotated with color overlays, with each color representing a particular segmented region, spatial measurements of anatomical structures, and information reports computed from the image data, segmentations, and measurements. The BrainInsight processing architecture includes a proprietary automated internal pipeline that performs whole brain segmentation, ventricle segmentation, and midline shift measurements based on machine learning tools. Additionally, the system's automated safety measures include automated quality control functions, such as a tissue contrast check and scan protocol verification. The system is installed on a standard computing platform, e.g. a server that may be in the cloud, and is designed to support file transfer for input and output of results.

    AI/ML Overview

    The provided text describes the BrainInsight device and references its 510(k) summary (K202414). However, it does not contain specific acceptance criteria or a detailed study description with performance metrics, sample sizes, or ground truth establishment relevant to those criteria. The "Non-clinical Performance Data" section lists areas of evaluation but doesn't provide the results against specific criteria.

    Therefore, I cannot fulfill the request to provide a table of acceptance criteria and reported device performance based solely on the provided text.

    However, I can extract information related to the studies mentioned and other requested points:


    1. Table of Acceptance Criteria and Reported Device Performance

    • Not explicitly provided in the text. The document lists areas of non-clinical performance data (Cybersecurity and PHI protection, Midline shift, 3D Coordinates and alignment, Segmentation, Data Quality Control, Audit trail, User Manual information, Software control, Ventricle segmentation, Midline shift measurement, Skull stripping). However, it does not state specific acceptance criteria (e.g., "midline shift accuracy > X%") or the actual performance achieved against such criteria.

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: Not specified in the provided text.
    • Data Provenance: Not specified in the provided text. The device processes MRI scans from "Hyperfine FSE MRI scans acquired with specified protocols." Whether these were retrospective or prospective, or from specific countries, is not mentioned.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Not explicitly provided in the text. The document states that "Results must be reviewed by a trained physician," implying human review, but does not detail how ground truth for a test set was established (e.g., number of experts, their qualifications, or their role in defining ground truth for segmentation or measurement accuracy).

    4. Adjudication Method for the Test Set

    • Not explicitly provided in the text.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No MRMC study mentioned. The document focuses on the device's standalone capabilities and its equivalence to a predicate. There is no mention of a study involving human readers with and without AI assistance or effect sizes.

    6. Standalone (Algorithm Only Without Human-in-the-loop) Performance

    • Yes, a standalone evaluation was performed. The "Non-clinical Performance Data" section describes software evaluations conducted to confirm various aspects like midline shift, 3D coordinates and alignment, segmentation, ventricle segmentation, and skull stripping. This indicates an assessment of the algorithm's performance independent of human input during the processing phase.

    7. Type of Ground Truth Used

    • Not explicitly provided in the text. The document describes the device's function (automatic labeling, spatial measurement, volumetric quantification, segmentation, midline shift measurements) and states "Performance data was limited to software evaluations to confirm...". While this implies comparison to some form of truth, the type of ground truth (e.g., expert consensus, manual tracings, pathology, outcomes data) for the segmentation, measurements, and other evaluated features is not detailed.

    8. Sample Size for the Training Set

    • Not explicitly provided in the text. The device uses "machine learning tools" for its processing architecture, indicating the use of a training set, but its size is not disclosed.

    9. How the Ground Truth for the Training Set Was Established

    • Not explicitly provided in the text. While it states machine learning is used, the process for establishing the ground truth labels or segmentations used to train these models is not detailed.
