510(k) Data Aggregation

    K Number
    K241922
    Device Name
    Myomics
    Manufacturer
    Date Cleared
    2025-02-28 (242 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Myomics is intended to be used for viewing, post-processing, and qualitative evaluation of cardiovascular magnetic resonance (MR) images in the Digital Imaging and Communications in Medicine (DICOM) standard format. It provides a set of tools to assist physicians in the qualitative assessment of cardiac images and quantitative measurements of the heart and adjacent vessels, and to view the presence or absence of physician-identified lesions in blood vessels. The target population for the manual workflows of Myomics is not restricted; however, the semi-automated machine learning algorithms of Myomics are intended for an adult population.

    The software comprises various analysis modules, including AI-powered algorithms, for a comprehensive evaluation of MR images.

    Myomics is used for cardiac images acquired from a 3.0 T MR scanner.

    Myomics shall be used only for cardiac images acquired from an MR scanner. It shall be used by qualified medical professionals, experienced in examining cardiovascular MR images, for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process.

    Device Description

    Myomics is a software application for the analysis of cardiovascular MR images in DICOM standard format. The software is a stand-alone product that can be integrated into a hospital or private practice environment. The device has a graphical user interface that allows users to analyze cardiovascular MR images qualitatively and quantitatively.

    AI/ML Overview

    Based on the provided text, here's a detailed description of the acceptance criteria and the study proving the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criterion | Reported Device Performance
    --- | ---
    Myocardium segmentation accuracy (DICE Score) | All AI modules achieved an average DICE Score of over 0.7.
    Generalizability across MR machine manufacturers | Performance was tested on 728 anonymized patient images from multiple major MR imaging device vendors, supporting generalizability.
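    The acceptance criterion above is stated in terms of the DICE Score, an overlap metric between the algorithm's segmentation and a reference segmentation. The following is a minimal illustrative sketch, not taken from the submission, assuming binary myocardium masks stored as NumPy arrays:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    Dice = 2 * |A ∩ B| / (|A| + |B|); 1.0 is perfect overlap, 0.0 is none.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 4x4 masks (hypothetical), e.g. a predicted vs. reference myocardium region.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0]])
print(round(dice_score(pred, truth), 3))  # 2*5 / (6+5) ≈ 0.909
```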

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 728 anonymized patient images were used for the AI performance test. This breaks down by AI module as follows (the per-module counts sum to 728; see the arithmetic check after this list):
      • Native T1 Map Myocardium Segmentation: 92 cases
      • Post T1 Map Myocardium Segmentation: 91 cases
      • T2 Map Myocardium Segmentation: 109 cases
      • CINE Myocardium Segmentation: 90 cases
      • LGE PSIR Myocardium Segmentation: 77 cases
      • CINE RV Myocardium Segmentation: 192 cases
      • LGE Magnitude Myocardium Segmentation: 77 cases
    • Data Provenance: The document states that the cases were "anonymized," implying patient privacy was maintained. No specific country of origin is mentioned. The data was "not utilized during the algorithm training process," indicating it was a separate test set. The study appears to be retrospective given the description of using existing anonymized patient images.
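    As a quick consistency check, the per-module test set sizes listed above do sum to the reported 728-image total. A minimal sketch (the dictionary structure is illustrative; the counts and module names are taken from the breakdown above):

```python
# Per-module test set sizes reported for the AI performance test.
test_set_sizes = {
    "Native T1 Map Myocardium Segmentation": 92,
    "Post T1 Map Myocardium Segmentation": 91,
    "T2 Map Myocardium Segmentation": 109,
    "CINE Myocardium Segmentation": 90,
    "LGE PSIR Myocardium Segmentation": 77,
    "CINE RV Myocardium Segmentation": 192,
    "LGE Magnitude Myocardium Segmentation": 77,
}

total = sum(test_set_sizes.values())
assert total == 728, f"expected 728, got {total}"
print(f"Total test cases across modules: {total}")
```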

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts

    The document does not explicitly state the number of experts used or their specific qualifications for establishing the ground truth for the test set. It mentions the "AI performance acceptance criteria, defined using the DICE Score," but doesn't detail how the reference standard (ground truth) for calculating these scores was generated (e.g., whether it was expert consensus manual segmentation).

    4. Adjudication Method for the Test Set

    The document does not describe a formal adjudication method (e.g., 2+1, 3+1) for the test set. It refers to the "DICE Score" as the evaluation metric, which implies a comparison against a pre-established ground truth without detailing an expert adjudication process specifically for the test data.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    The document does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study. The focus is on the standalone performance of the AI modules against predefined metrics. There is no information provided about how much human readers improve with AI vs. without AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    Yes, a standalone performance evaluation was done. The "Validation of AI Modules" section describes the testing of the "machine learning algorithms of Myomics" using a dedicated test set, focusing on the algorithm's performance in segmenting the myocardium (measured by DICE Score). This indicates an algorithm-only evaluation.
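    In practice, a standalone evaluation of this kind reduces to computing per-case DICE Scores for each module's test set and checking that each module's average exceeds 0.7. The sketch below is a hypothetical illustration of that acceptance check; the per-case scores would come from a function like the dice_score sketch above, and the values shown here are made up, not reported results:

```python
from statistics import mean

THRESHOLD = 0.7  # acceptance criterion: average DICE Score over 0.7 per module

# Hypothetical per-case DICE Scores keyed by AI module (illustrative values only;
# the real per-module test sets are listed in section 2).
module_scores = {
    "Native T1 Map Myocardium Segmentation": [0.86, 0.79, 0.91],
    "CINE Myocardium Segmentation": [0.88, 0.83, 0.90],
}

for module, scores in module_scores.items():
    avg = mean(scores)
    status = "PASS" if avg > THRESHOLD else "FAIL"
    print(f"{module}: average DICE = {avg:.3f} -> {status}")
```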

    7. The Type of Ground Truth Used

    The ground truth used for the AI performance evaluation relates to the myocardium segmentation task, and the DICE Score is used to measure "similarity or overlap between two sets." This strongly implies that the ground truth consists of expert manual segmentations of the myocardium against which the algorithm's output is compared. However, the exact method for generating these ground truth segmentations (e.g., expert consensus, single expert, pathology confirmation) is not explicitly detailed.

    8. The Sample Size for the Training Set

    The dataset used to develop the AI modules comprised 3723 anonymized cases, which was split into training, validation, and test sets at a ratio of 80%, 10%, and 10%, respectively; the training portion therefore corresponds to roughly 80% of those 3723 cases.
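    For a rough sense of what that ratio implies, the split arithmetic works out as in the sketch below (the per-split counts are derived from the stated ratio, not reported in the document):

```python
total_cases = 3723  # anonymized cases reported for algorithm development

train_n = int(total_cases * 0.80)        # 2978 cases for training
val_n = int(total_cases * 0.10)          # 372 cases for validation
test_n = total_cases - train_n - val_n   # 373 cases held out for testing

print(train_n, val_n, test_n)  # 2978 372 373
```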

    9. How the Ground Truth for the Training Set Was Established

    The document states that the "training involved a dataset of 3723 anonymized cases," and that it was "divided into training, validation, and test sets." While it mentions the purpose of the AI modules is "Myocardium Segmentation," it does not specify how the ground truth segmentations for these 3723 cases (or the training portion of them) were established. It's implied that these cases included the necessary ground truth labels for the machine learning algorithms to learn from, but the method (e.g., manual annotation by experts, semi-automated methods, etc.) is not described.
