
510(k) Data Aggregation

    K Number: K232231
    Device Name: QP-Brain®
    Manufacturer:
    Date Cleared: 2023-12-13 (139 days)
    Product Code:
    Regulation Number: 892.2050
    Reference Devices: K173939, K192531

    Intended Use

    QP-Brain® is a medical image processing application intended for automatic labeling and volumetric quantification of segmentable brain structures and white matter hyperintensities (WMH) from MR images of adults and adolescents 18 years of age and older. Volumetric measurements may be compared to reference percentile data. The application is intended to be used by clinicians with proper training as a support tool in the assessment of structural MRIs. Patient management decisions should not be made based solely on the results of the device.

    Device Description

    QP-Brain® is a medical image processing and analysis software intended to analyze brain MR imaging studies. These brain MR images, when interpreted by clinicians with proper training, may yield clinically useful information.

    QP-Brain® is an automated MR imaging post-processing medical device software that uses 3D T1-weighted (T1w) Gradient Echo structural MRI scans to provide quantitative imaging analysis and automatic segmentation of brain regions. If T2 FLAIR images are uploaded, QP-Brain® uses this sequence to automatically identify white matter hyperintensities using Artificial Intelligence.

    Once the T1 MR or T2 FLAIR has been uploaded, QP-Brain® will check the available sequences for compatibility before automatically launching the analysis.
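The gating logic described above (check available sequences, then launch the applicable analysis modules) can be sketched as follows. This is a hypothetical illustration, not the device's actual implementation; the sequence labels and the `check_study` function are assumptions for the example.

```python
# Hypothetical pre-analysis gate: the device description says available
# sequences are checked for compatibility before the analysis launches.
REQUIRED = {"T1w"}          # 3D T1-weighted is mandatory for segmentation
OPTIONAL = {"T2_FLAIR"}     # enables WMH identification when present

def check_study(sequences):
    """Decide which analysis modules to launch for an uploaded study."""
    seqs = set(sequences)
    if not REQUIRED <= seqs:
        return "rejected: missing 3D T1w sequence"
    modules = ["brain structure segmentation"]
    if OPTIONAL & seqs:
        modules.append("WMH identification")
    return "launch: " + ", ".join(modules)

print(check_study(["T1w"]))              # launch: brain structure segmentation
print(check_study(["T1w", "T2_FLAIR"]))  # both modules launch
print(check_study(["T2_FLAIR"]))         # rejected: missing 3D T1w sequence
```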

    The output of the medical device consists of specific volumes with segmentation overlays as well as different reports with quantitative information. The outputs can be returned to and displayed on third-party DICOM workstations and Picture Archiving and Communication Systems (PACS).

    If age and gender information is available in the study DICOM tags, the brain structure analysis module frames all quantified volumes against a normative database for comparison with cognitively normal adults of the same age and gender.
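A normative comparison of this kind amounts to locating a measured volume within a reference distribution for the matching age/gender stratum. A minimal sketch, assuming a hypothetical normative sample (the values and the `percentile_rank` helper are illustrative, not the device's actual database):

```python
from bisect import bisect_left

# Hypothetical normative volumes (cm^3) for one structure in one
# age/gender stratum; the real device would draw these from its
# normative database matched via DICOM age and gender tags.
NORMATIVE_VOLUMES = [2.9, 3.1, 3.3, 3.4, 3.5, 3.6, 3.7, 3.9, 4.1, 4.4]

def percentile_rank(volume, normative):
    """Percentage of the normative sample strictly below the measured volume."""
    ordered = sorted(normative)
    return 100 * bisect_left(ordered, volume) / len(ordered)

print(percentile_rank(3.45, NORMATIVE_VOLUMES))  # 40.0
```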

    QP-Brain® also allows for longitudinal information reporting when a patient has undergone more than one MRI over time.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for QP-Brain®:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state "acceptance criteria" as a set of predefined thresholds. Instead, it presents performance metrics for the device compared to a reference standard (manual expert segmentation). The implicit acceptance criteria appear to be the achievement of high correlation and low error rates, demonstrating that QP-Brain® functions as intended and is comparable to manual segmentation.

    Metric / Region                  Reported Device Performance (Mean (95% CI or Range))   Implicit Acceptance Threshold (Inferred)
    Brain Volumetry (T1 MRI)
    GM DICE Score                    0.983 (0.981 – 0.986)                                  High Dice score (e.g., > 0.95 or similar to expert inter-rater variability)
    GM Relative Volume Difference    2.846 (2.523 – 3.008)                                  Low relative volume difference (e.g., 0.8)

    Note: The acceptance thresholds above are inferred based on the context of demonstrating substantial equivalence and acceptable clinical utility. The document itself does not explicitly list these as "acceptance criteria" with specific numerical cutoffs that were pre-defined.
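For reference, the two metrics in the table above have standard definitions: the Dice score measures voxel-overlap between the automatic and manual segmentations, and the relative volume difference measures how far the automatic volume deviates from the reference volume. A minimal sketch with toy voxel sets (the masks and helper names are illustrative, not the submission's actual evaluation code):

```python
def dice_score(pred, ref):
    """Dice overlap between two voxel sets: 2*|A ∩ B| / (|A| + |B|)."""
    pred, ref = set(pred), set(ref)
    if not pred and not ref:
        return 1.0
    return 2 * len(pred & ref) / (len(pred) + len(ref))

def relative_volume_difference(pred_vol, ref_vol):
    """Absolute volume difference as a percentage of the reference volume."""
    return abs(pred_vol - ref_vol) / ref_vol * 100

# Toy 2D voxel sets standing in for a grey-matter mask.
auto = {(x, y) for x in range(10) for y in range(10)}              # 100 voxels
manual = {(x, y) for x in range(10) for y in range(10) if x < 9}   # 90 voxels

print(round(dice_score(auto, manual), 3))                          # 0.947
print(round(relative_volume_difference(len(auto), len(manual)), 1))  # 11.1
```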

    2. Sample Size for Test Set and Data Provenance

    • Sample Size for Test Set: Not explicitly stated in the provided text. The tables present summarized metrics (e.g., mean DICE scores, mean errors, and confidence intervals), but the number of images or patients in the test set is not provided.
    • Data Provenance: Not specified. The document does not mention the country of origin of the data or whether it was retrospective or prospective.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Not explicitly stated. The document mentions "manual expert segmentation" as the reference standard but does not specify the number of experts involved.
    • Qualifications of Experts: Not specified. The document refers to them as "expert" but does not detail their specific qualifications (e.g., years of experience, board certifications).

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated. The document refers to "manual expert segmentation" as the reference standard, which implies that expert opinion was used, but the method for resolving any potential disagreements among multiple experts (e.g., 2+1, 3+1, or simple consensus) is not mentioned. If only one expert performed the segmentation, then no adjudication would be needed.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No. The performance data presented compares the device's output to "manual expert segmentation" as a reference standard, not to human readers' performance with or without AI assistance. Therefore, it is a standalone performance study against a reference standard, not an MRMC comparative effectiveness study.
    • Effect size of human readers improvement: Not applicable, as no MRMC study was performed.

    6. Standalone (Algorithm Only) Performance

    • Was a standalone performance study done? Yes. The "Performance Data" section describes how "QP-Brain® outputs were compared to manual expert segmentation (reference standard)." This directly assesses the algorithm's performance without human intervention after the algorithm has generated its output.

    7. Type of Ground Truth Used

    • Type of Ground Truth: Expert consensus (or expert segmentation). The document states, "For the performance evaluation, QP-Brain® outputs were compared to manual expert segmentation (reference standard)."

    8. Sample Size for the Training Set

    • Sample Size for Training Set: Not specified. The document only discusses the validation phase and its performance results. Information regarding the training set's size is not provided.

    9. How Ground Truth for the Training Set was Established

    • How Ground Truth for Training Set was Established: Not specified. While it's implied that "manual expert segmentation" would likely also be used for establishing ground truth in the training data, the document does not explicitly state this or provide details on the process for the training set.