510(k) Data Aggregation

    K Number
    K203279
    Device Name
    Veuron-Brain-mN1
    Manufacturer
    Date Cleared
    2022-07-12

    (613 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Predicate For
    N/A
    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | PCCP Authorized | Third Party | Expedited Review
    Intended Use

    Veuron-Brain-mN1 is intended for use in the post-acquisition image enhancement of 3T MR images of the brain acquired through a 3D gradient-echo sequence. When used in combination with other clinical information, the Veuron-Brain-mN1 application may aid the qualified radiologist with diagnosis by providing enhanced visualization of tissue structures with magnetic susceptibility contrasts in brain 3T MR images.

    Device Description

    Veuron-Brain-mN1 is post-processing software intended to provide visualization, manipulation, and reconstruction capabilities, including susceptibility map-weighted images, for 3D gradient multi-echo brain 3T MR images. The Veuron-Brain-mN1 aids in the clinical analysis of brain structures from MR images.
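
    The submission text does not describe how the susceptibility map-weighted images are generated. Purely as an illustration, a common approach in the susceptibility-weighted imaging literature multiplies the gradient-echo magnitude image by a mask derived from a quantitative susceptibility map; the sketch below follows that pattern. The function name, threshold, and mask power are assumptions for illustration, not details taken from the 510(k) summary.

```python
import numpy as np

def susceptibility_map_weighting(magnitude: np.ndarray,
                                 chi_map: np.ndarray,
                                 threshold_ppm: float = 0.1,
                                 power: int = 4) -> np.ndarray:
    """Illustrative susceptibility-map weighting of a magnitude volume.

    magnitude     -- gradient multi-echo magnitude image
    chi_map       -- quantitative susceptibility map in ppm, same shape
    threshold_ppm -- assumed susceptibility above which voxels are fully attenuated
    power         -- assumed number of times the weighting mask is applied
    """
    # Linear mask: 1 where chi <= 0, 0 where chi >= threshold, ramp in between.
    mask = np.clip(1.0 - chi_map / threshold_ppm, 0.0, 1.0)
    # Repeated application of the mask emphasizes susceptibility contrast.
    return magnitude * mask ** power
```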

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the Veuron-Brain-mN1 device:

    Based on the provided text, there is no specific clinical study described that proves the device meets detailed acceptance criteria for diagnostic performance outcomes (e.g., sensitivity, specificity, accuracy). The document focuses on non-clinical performance and technological equivalence to a predicate device.

    Here's the breakdown of the information requested, based only on the provided text:


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state specific quantifiable acceptance criteria for diagnostic performance, nor does it provide a table of performance metrics (like sensitivity, specificity, or accuracy) derived from a clinical study.

    The text mentions:

    • "unit tests and integration tests were performed, and all results met the acceptance criteria."
    • "The predefined acceptance criteria were met to demonstrate substantial equivalence to the predicate."

    However, these "acceptance criteria" are related to system functionality, software verification and validation, and basic performance metrics like Contrast-to-Noise Ratio (CNR) and Signal-to-Noise Ratio (SNR) in phantom and clinical images, rather than clinical diagnostic accuracy.
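    The summary does not give the exact CNR/SNR formulation used. A minimal ROI-based sketch of one common definition (mean tissue signal over the standard deviation of a background, noise-only region) is shown below; the function names and the choice of a background ROI are assumptions, not details from the submission.

```python
import numpy as np

def snr(signal_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """SNR: mean signal in a tissue ROI over the standard deviation
    of a background (noise-only) ROI."""
    return float(np.mean(signal_roi) / np.std(noise_roi, ddof=1))

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, noise_roi: np.ndarray) -> float:
    """CNR: absolute difference of two tissue ROI means, normalized by
    the background noise standard deviation."""
    return float(abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi, ddof=1))
```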

    General Device Performance (from text):

    • Functionality: Unit and integration tests met acceptance criteria.
    • Safety: Demonstrated through conformance with recognized standards for risk management, software life-cycle processes, and usability (ISO 14971, IEC 62304, IEC 62366).
    • Effectiveness (Non-clinical):
      • Evaluated using phantom testing representing the range of susceptibility values in brain tissue (an illustrative phantom-check sketch follows this list).
      • Evaluated on clinical images using CNR/SNR metrics.
      • Scanner models: Siemens Healthcare and Philips scanners at 3T field strength.
      • Results supported that "all the system requirements have met their acceptance criteria and are adequate for its intended use."
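
    The text likewise does not state how the phantom measurements were compared against the engineered susceptibility values. One plausible form of such an acceptance check is sketched below; the compartment names, nominal values, and tolerance are hypothetical.

```python
def within_tolerance(measured_ppm: dict[str, float],
                     nominal_ppm: dict[str, float],
                     tolerance_ppm: float = 0.02) -> dict[str, bool]:
    """Flag whether each phantom compartment's measured mean susceptibility
    falls within a fixed tolerance of its engineered (nominal) value."""
    return {name: abs(measured_ppm[name] - nominal_ppm[name]) <= tolerance_ppm
            for name in nominal_ppm}

# Hypothetical compartments spanning a brain-tissue susceptibility range (ppm).
nominal = {"vial_1": 0.00, "vial_2": 0.05, "vial_3": 0.15}
measured = {"vial_1": 0.004, "vial_2": 0.046, "vial_3": 0.158}
print(within_tolerance(measured, nominal))
# {'vial_1': True, 'vial_2': True, 'vial_3': True}
```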

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify a sample size for a "test set" in the context of a clinical performance study.

    • For the non-clinical performance evaluation mentioned: "The device performance was also evaluated on clinical images using CNR/SNR metrics."
      • Sample Size: Not specified.
      • Data Provenance: Not specified (e.g., country of origin, retrospective or prospective). It only mentions "clinical images."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The document does not mention the use of experts to establish ground truth for a clinical test set. The performance evaluation described is non-clinical (phantom, CNR/SNR on clinical images), not based on expert-derived ground truth for diagnostic accuracy.


    4. Adjudication Method for the Test Set

    Since no clinical test set with expert-established ground truth is described, there is no adjudication method mentioned.


    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The document does not describe an MRMC comparative effectiveness study. It does not mention human readers improving with or without AI assistance.


    6. Standalone (Algorithm Only) Performance Study

    The description of the non-clinical performance ("The device performance was evaluated using phantom testing... Additionally, the device performance was also evaluated on clinical images using CNR/SNR metrics.") describes the standalone performance of the algorithm. However, this is primarily focused on image quality metrics (CNR/SNR) and system functionality, not diagnostic accuracy in a clinical context.


    7. Type of Ground Truth Used for Performance Evaluation

    For the non-clinical performance evaluation:

    • Phantom Testing: Ground truth is inherent in the known susceptibility values of the phantom (engineered truth).
    • Clinical Images with CNR/SNR: The "ground truth" here relates to objective image quality metrics (CNR, SNR), not a clinical diagnosis or pathology.

    8. Sample Size for the Training Set

    The document explicitly states, "The software algorithms are not based on machine learning." Therefore, there is no training set in the deep learning/machine learning sense.


    9. How the Ground Truth for the Training Set Was Established

    As the algorithms are not based on machine learning, there is no training set and thus no ground truth established for a training set.
