510(k) Data Aggregation
(242 days)
Myomics is intended to be used for viewing, post-processing, and qualitative evaluation of cardiovascular magnetic resonance (MR) images in the Digital Imaging and Communications in Medicine (DICOM) standard format. It provides a set of tools to assist physicians in the qualitative assessment of cardiac images and quantitative measurements of the heart and adjacent vessels, and to view the presence or absence of physician-identified lesions in blood vessels. The target population for manual workflows of Myomics is not restricted; however, the semi-automated machine learning algorithms of Myomics are intended for an adult population.
The software comprises various analysis modules, including AI-powered algorithms, for a comprehensive evaluation of MR images.
Myomics is used for cardiac images acquired from a 3.0 T MR scanner.
Myomics shall be used only for cardiac images acquired from an MR scanner. It shall be used by qualified medical professionals, experienced in examining cardiovascular MR images, for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process.
Myomics is a software application for analysis of cardiovascular MR images in DICOM Standard format. The software can be used as a stand-alone product that can be integrated into a hospital or private practice environment. This device has a graphical user interface which allows users to analyze cardiovascular MR images qualitatively and quantitatively.
Based on the provided text, here's a detailed description of the acceptance criteria and the study proving the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Myocardium Segmentation Accuracy (DICE Score) | All AI modules achieved an average DICE Score of over 0.7. |
| Generalizability across MR machine manufacturers | Performance tested on 728 anonymized patient images from various major MR imaging device vendors, indicating generalizability. |
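The DICE Score used as the acceptance criterion measures the overlap between two segmentation masks. As a minimal sketch (not the manufacturer's implementation), the metric for binary masks can be computed as:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient: 2*|A∩B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 4x4 masks (illustrative data, not from the submission)
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(dice_score(pred, truth))  # 2*3 / (4+3) ≈ 0.857
```

A score of 1.0 means perfect overlap and 0.0 means no overlap, so the stated criterion of an average DICE over 0.7 requires substantial agreement with the reference segmentation.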
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 728 anonymized patient images were used for the AI performance test. This breaks down by AI module as follows:
- Native T1 Map Myocardium Segmentation: 92 cases
- Post T1 Map Myocardium Segmentation: 91 cases
- T2 Map Myocardium Segmentation: 109 cases
- CINE Myocardium Segmentation: 90 cases
- LGE PSIR Myocardium Segmentation: 77 cases
- CINE RV Myocardium Segmentation: 192 cases
- LGE Magnitude Myocardium Segmentation: 77 cases
- Data Provenance: The document states that the cases were "anonymized," implying patient privacy was maintained. No specific country of origin is mentioned. The data was "not utilized during the algorithm training process," indicating it was a separate test set. The study appears to be retrospective given the description of using existing anonymized patient images.
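As a quick arithmetic check, the per-module case counts listed above do sum to the stated 728-image test set (the dictionary below simply restates those counts):

```python
# Per-module test-set case counts, as listed in the submission summary
module_cases = {
    "Native T1 Map Myocardium Segmentation": 92,
    "Post T1 Map Myocardium Segmentation": 91,
    "T2 Map Myocardium Segmentation": 109,
    "CINE Myocardium Segmentation": 90,
    "LGE PSIR Myocardium Segmentation": 77,
    "CINE RV Myocardium Segmentation": 192,
    "LGE Magnitude Myocardium Segmentation": 77,
}
print(sum(module_cases.values()))  # 728
```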
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts
The document does not explicitly state the number of experts used or their specific qualifications for establishing the ground truth for the test set. It mentions the "AI performance acceptance criteria, defined using the DICE Score," but doesn't detail how the reference standard (ground truth) for calculating these scores was generated (e.g., whether it was expert consensus manual segmentation).
4. Adjudication Method for the Test Set
The document does not describe a formal adjudication method (e.g., 2+1, 3+1) for the test set. It refers to the "DICE Score" as the evaluation metric, which implies a comparison against a pre-established ground truth without detailing an expert adjudication process specifically for the test data.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
The document does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study. The focus is on the standalone performance of the AI modules against predefined metrics. There is no information provided about how much human readers improve with AI vs. without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done
Yes, a standalone performance evaluation was done. The "Validation of AI Modules" section describes the testing of the "machine learning algorithms of Myomics" using a dedicated test set, focusing on the algorithm's performance in segmenting the myocardium (measured by DICE Score). This indicates an algorithm-only evaluation.
7. The Type of Ground Truth Used
The ground truth used for the AI performance evaluation appears to be based on the "segmenting the Myocardium" task, and the DICE Score is used to measure "similarity or overlap between two sets." This strongly implies that the ground truth consists of expert manual segmentations of the myocardium that the algorithm's output is compared against. However, the exact method for generating these ground truth segmentations (e.g., expert consensus, single expert, pathology confirmation) is not explicitly detailed.
8. The Sample Size for the Training Set
The training involved a dataset of 3723 anonymized cases in total. This dataset was split into training, validation, and test sets at a ratio of 80%, 10%, and 10%, respectively, so the training portion comprised approximately 80% of the 3723 cases.
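The 80/10/10 split described above can be sketched as follows; the `split_dataset` helper and the fixed seed are illustrative assumptions, not details from the submission:

```python
import random

def split_dataset(case_ids, ratios=(0.8, 0.1, 0.1), seed=0):
    """Shuffle case IDs and split them into train/validation/test partitions."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    n = len(ids)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    # remainder goes to the test partition
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_dataset(range(3723))
print(len(train), len(val), len(test))  # 2978 372 373
```

Rounding the 80/10/10 ratios over 3723 cases gives roughly 2978 training, 372 validation, and 373 test cases; the exact partition sizes used by the manufacturer are not stated in the document.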
9. How the Ground Truth for the Training Set Was Established
The document states that the "training involved a dataset of 3723 anonymized cases," and that it was "divided into training, validation, and test sets." While it mentions the purpose of the AI modules is "Myocardium Segmentation," it does not specify how the ground truth segmentations for these 3723 cases (or the training portion of them) were established. It's implied that these cases included the necessary ground truth labels for the machine learning algorithms to learn from, but the method (e.g., manual annotation by experts, semi-automated methods, etc.) is not described.
(200 days)
Myomics Q is intended to be used for viewing, post-processing and analysis of cardiac magnetic resonance (MR) images in a Digital Imaging and Communications in Medicine (DICOM) Standard format. It enables:
- Importing cardiac MR images in DICOM format.
- Supporting clinical diagnostics by analysis of cardiac MR images using display functionality such as panning, windowing, and zooming through series/slices of the images.
- Supporting clinical diagnostic analysis of the heart in cardiac MR images, including signal intensity.
- The software package is designed to support the physician in assessing, documenting, and following up heart disease by cardiac MRI.
It shall be used by qualified medical professionals, experienced in examining cardiovascular MR images, for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process. This device is a software application that can be used as a stand-alone product or in a network environment.
The target population for the device is not restricted; however, the need for image acquisition by a cardiac MR scanner may limit the use of the device for certain sectors of the public.
Myomics Q is a software application for evaluating cardiovascular images in a DICOM Standard format. The software can be used as a stand-alone product that can be integrated into a hospital or private practice environment. This device has a graphical user interface which allows users to analyze cardiac MR images qualitatively and quantitatively.
Here's a breakdown of the acceptance criteria and study details for the Myomics Q device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state formal acceptance criteria with specific thresholds for each performance metric. Instead, the performance tests verify the proper functioning of features and quantitative comparisons against a reference device within a certain margin. The implicit acceptance criterion for the quantitative comparisons is that the results should be "very similar" and fall within a ±5% deviation from the reference device.
| Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|
| **Functional Verification** | |
| Proper installation of Myomics Q on appropriate OS (Windows) | Passed (SPPT001) |
| Import cardiac MR images function working properly | Passed (SPPT002) |
| Export cardiac MR images function working properly | Passed (SPPT003) |
| Patient information function working properly | Passed (SPPT004) |
| Series overview function working properly | Passed (SPPT005) |
| Contour drawing functions (Endocardium, Epicardium, Move, Pinch, Nudge, Curved Line, Free Hand, Smoothing, Undo, Redo, Restart, Delete, Confirm, Zooming, Panning, Windowing) working properly | Passed (SPPT006) |
| T1 analysis function working properly (T1 Image or T1 Map display) | Passed (SPPT007) |
| T2 analysis function working properly (T2 Image or T2 Map display) | Passed (SPPT008) |
| LGE analysis function working properly (LGE Image display) | Passed (SPPT009) |
| **Quantitative Comparison (Implicit Acceptance Threshold: ≤ ±5% deviation from cvi42)** | |
| Results of Myomics Q are very similar to cvi42 in the polar map report in Native T1 analysis | The results of Myomics Q did not show a difference of more than ±5% compared to the results of cvi42 (95% of cvi42 results … |
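The implicit ±5% acceptance check against the cvi42 reference can be expressed as a relative-deviation test. This is a sketch of the comparison logic only, not the manufacturer's actual verification code, and the example values are made up:

```python
def within_tolerance(test_value, reference_value, tolerance=0.05):
    """True if test_value deviates from reference_value by at most ±tolerance (relative)."""
    if reference_value == 0:
        return test_value == 0  # avoid division by zero; exact match required
    return abs(test_value - reference_value) / abs(reference_value) <= tolerance

# e.g. a hypothetical native T1 segment value versus a cvi42 reference of 1000 ms
print(within_tolerance(1010.0, 1000.0))  # True: 1.0% deviation
print(within_tolerance(940.0, 1000.0))   # False: 6.0% deviation
```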