Search Results
Found 2 results
510(k) Data Aggregation: Myomics (242 days)
Myomics is intended to be used for viewing, post-processing, and qualitative evaluation of cardiovascular magnetic resonance (MR) images in the Digital Imaging and Communications in Medicine (DICOM) standard format. It provides a set of tools to assist physicians in the qualitative assessment of cardiac images and quantitative measurements of the heart and adjacent vessels, and to view the presence or absence of physician-identified lesions in blood vessels. The target population for manual workflows of Myomics is not restricted; however, the semi-automated machine learning algorithms of Myomics are intended for an adult population.
The software comprises various analysis modules, including AI-powered algorithms, for a comprehensive evaluation of MR images.
Myomics is used for cardiac images acquired from a 3.0 T MR scanner.
Myomics shall be used only for cardiac images acquired from an MR scanner. It shall be used by qualified medical professionals, experienced in examining cardiovascular MR images, for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process.
Myomics is a software application for analyzing cardiovascular MR images in the DICOM Standard format. The software can be used as a stand-alone product that can be integrated into a hospital or private practice environment. The device has a graphical user interface that allows users to analyze cardiovascular MR images qualitatively and quantitatively.
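Since the device ingests DICOM-format images, a minimal sketch of reading such a cardiac MR series is shown below. The pydicom library and the folder path are illustrative choices; the filing does not describe the product's internal implementation.

```python
from pathlib import Path

import pydicom  # open-source DICOM reader, used here purely for illustration

series_dir = Path("cardiac_mr_series")  # hypothetical folder of .dcm files

# Read every slice and order the stack by InstanceNumber.
slices = [pydicom.dcmread(p) for p in series_dir.glob("*.dcm")]
slices.sort(key=lambda ds: int(ds.InstanceNumber))

for ds in slices:
    # Standard DICOM attributes; Modality should read "MR" for these images.
    print(ds.Modality, ds.get("SeriesDescription", ""), ds.pixel_array.shape)
```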
Based on the provided text, here's a detailed description of the acceptance criteria and the study proving the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Myocardium segmentation accuracy: average DICE Score > 0.7 per AI module | All AI modules achieved an average DICE Score above 0.7. |
| Generalizability across MR machine manufacturers | Performance tested on 728 anonymized patient images from various major MR imaging device vendors, indicating generalizability. |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 728 anonymized patient images were used for the AI performance test. This breaks down by AI module as follows (the counts are verified in the sketch after this list):
- Native T1 Map Myocardium Segmentation: 92 cases
- Post T1 Map Myocardium Segmentation: 91 cases
- T2 Map Myocardium Segmentation: 109 cases
- CINE Myocardium Segmentation: 90 cases
- LGE PSIR Myocardium Segmentation: 77 cases
- CINE RV Myocardium Segmentation: 192 cases
- LGE Magnitude Myocardium Segmentation: 77 cases
- Data Provenance: The document states that the cases were "anonymized," implying patient privacy was maintained. No specific country of origin is mentioned. The data was "not utilized during the algorithm training process," indicating it was a separate test set. The study appears to be retrospective given the description of using existing anonymized patient images.
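The per-module counts above are internally consistent with the stated total, as this quick check confirms:

```python
module_cases = {
    "Native T1 Map": 92, "Post T1 Map": 91, "T2 Map": 109,
    "CINE": 90, "LGE PSIR": 77, "CINE RV": 192, "LGE Magnitude": 77,
}
assert sum(module_cases.values()) == 728  # matches the reported test-set size
```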
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts
The document does not explicitly state the number of experts used or their specific qualifications for establishing the ground truth for the test set. It mentions the "AI performance acceptance criteria, defined using the DICE Score," but doesn't detail how the reference standard (ground truth) for calculating these scores was generated (e.g., whether it was expert consensus manual segmentation).
4. Adjudication Method for the Test Set
The document does not describe a formal adjudication method (e.g., 2+1, 3+1) for the test set. It refers to the "DICE Score" as the evaluation metric, which implies a comparison against a pre-established ground truth without detailing an expert adjudication process specifically for the test data.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
The document does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study. The focus is on the standalone performance of the AI modules against predefined metrics. There is no information provided about how much human readers improve with AI vs. without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done
Yes, a standalone performance evaluation was done. The "Validation of AI Modules" section describes the testing of the "machine learning algorithms of Myomics" using a dedicated test set, focusing on the algorithm's performance in segmenting the myocardium (measured by DICE Score). This indicates an algorithm-only evaluation.
7. The Type of Ground Truth Used
The ground truth used for the AI performance evaluation appears to be based on the "segmenting the Myocardium" task, and the DICE Score is used to measure "similarity or overlap between two sets." This strongly implies that the ground truth consists of expert manual segmentations of the myocardium that the algorithm's output is compared against. However, the exact method for generating these ground truth segmentations (e.g., expert consensus, single expert, pathology confirmation) is not explicitly detailed.
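For concreteness, the DICE Score between an algorithm mask A and a reference mask B is DSC = 2|A ∩ B| / (|A| + |B|). The sketch below is a generic implementation of that metric and of the average-DICE > 0.7 acceptance check described above; the masks are synthetic stand-ins, not data from the submission.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient of two binary masks:
    DSC = 2 * |A & B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Synthetic stand-ins for (algorithm mask, reference mask) pairs.
rng = np.random.default_rng(0)
truth = rng.random((64, 64)) > 0.5
pred = truth.copy()
pred[:4] = ~pred[:4]  # flip a few rows to simulate segmentation error

scores = [dice_score(p, t) for p, t in [(pred, truth)]]
mean_dice = float(np.mean(scores))
print(f"mean DICE = {mean_dice:.3f}; passes > 0.7? {mean_dice > 0.7}")
```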
8. The Sample Size for the Training Set
The dataset used for algorithm development comprised 3723 anonymized cases, split into training, validation, and test sets at a ratio of 80%, 10%, and 10%, respectively; the training portion is therefore roughly 2978 cases (80% of 3723).
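As an illustration of that split arithmetic (the seed and code below are hypothetical, not from the submission), an 80/10/10 partition of 3723 cases yields 2978 training, 372 validation, and 373 test cases:

```python
import numpy as np

n_cases = 3723                   # total anonymized cases per the summary
rng = np.random.default_rng(42)  # hypothetical seed, not from the source
idx = rng.permutation(n_cases)   # shuffle case indices before splitting

n_train = int(0.80 * n_cases)    # 2978 cases
n_val = int(0.10 * n_cases)      # 372 cases
train, val, test = np.split(idx, [n_train, n_train + n_val])
print(len(train), len(val), len(test))  # -> 2978 372 373
```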
9. How the Ground Truth for the Training Set Was Established
The document states that the "training involved a dataset of 3723 anonymized cases," and that it was "divided into training, validation, and test sets." While it mentions the purpose of the AI modules is "Myocardium Segmentation," it does not specify how the ground truth segmentations for these 3723 cases (or the training portion of them) were established. It's implied that these cases included the necessary ground truth labels for the machine learning algorithms to learn from, but the method (e.g., manual annotation by experts, semi-automated methods, etc.) is not described.
Myomics Q (200 days)
Myomics Q is intended to be used for viewing, post-processing and analysis of cardiac magnetic resonance (MR) images in a Digital Imaging and Communications in Medicine (DICOM) Standard format. It enables:
- Importing cardiac MR images in DICOM format.
- Supporting clinical diagnostics by analysis of cardiac MR images using display functionality such as panning, windowing, zooming through series/slices of the images.
- Supporting clinical diagnostics by analysis of the heart and signal intensity in cardiac MR images.
- The software package is designed to support the physician in assessing, documenting, and following up heart disease by cardiac MRI.
It shall be used by qualified medical professionals, experienced in examining cardiovascular MR images, for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process. This device is a software application that can be used as a stand-alone product or in a network environment.
The target population for the device is not restricted; however, the need for image acquisition by a cardiac MR scanner may limit use of the device for certain sectors of the population.
Myomics Q is a software application for evaluating cardiovascular images in the DICOM Standard format. The software can be used as a stand-alone product that can be integrated into a hospital or private practice environment. The device has a graphical user interface that allows users to analyze cardiac MR images qualitatively and quantitatively.
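One of the display functions listed in the indications above, windowing, maps raw pixel intensities to display gray levels through a user-adjustable window center and width. The sketch below is a generic linear window/level transform under that definition, not the product's actual implementation; the center/width values are hypothetical.

```python
import numpy as np

def apply_window(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map intensities in [center - width/2, center + width/2] to [0, 255],
    clipping values outside the window (simplified linear VOI transform)."""
    lo, hi = center - width / 2.0, center + width / 2.0
    scaled = (np.clip(pixels, lo, hi) - lo) / max(hi - lo, 1e-9)
    return (scaled * 255.0).astype(np.uint8)

# Hypothetical MR intensities; in a viewer, center/width track mouse drags.
img = np.random.default_rng(1).integers(0, 4096, size=(256, 256))
display = apply_window(img, center=1024.0, width=2048.0)
```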
Here's a breakdown of the acceptance criteria and study details for the Myomics Q device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state formal acceptance criteria with specific thresholds for each performance metric. Instead, the performance tests verify the proper functioning of features and quantitative comparisons against a reference device within a certain margin. The implicit acceptance criterion for the quantitative comparisons is that the results should be "very similar" and fall within a ±5% deviation from the reference device.
| Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|
| Functional Verification | |
| Proper installation of Myomics Q on appropriate OS (Windows) | Passed (SPPT001) |
| Import cardiac MR Images function working properly | Passed (SPPT002) |
| Export cardiac MR Images function working properly | Passed (SPPT003) |
| Patient information function working properly | Passed (SPPT004) |
| Series overview function working properly | Passed (SPPT005) |
| Contour drawing functions (Endocardium, Epicardium, Move, Pinch, Nudge, Curved Line, Free Hand, Smoothing, Undo, Redo, Restart, Delete, Confirm, Zooming, Panning, Windowing) working properly | Passed (SPPT006) |
| T1 analysis function working properly (T1 Image or T1 Map display) | Passed (SPPT007) |
| T2 analysis function working properly (T2 Image or T2 Map display) | Passed (SPPT008) |
| LGE analysis function working properly (LGE Image display) | Passed (SPPT009) |
| Quantitative Comparison (Implicit Acceptance Threshold: ≤ ±5% deviation from cvi42) | |
| Results of Myomics Q are very similar to cvi42 in polar map report in Native T1 analysis | The results of Myomics Q did not show a difference of more than ±5% compared to the results of cvi42 (95% of cvi42 results < Myomics Q Results < 105% of cvi42 results). This deviation is attributed to user contour accuracy and is not considered to affect clinical performance. (SPPT010) |
| Results of Myomics Q are very similar to cvi42 in polar map report in Post T1 analysis | The results of Myomics Q did not show a difference of more than ±5% compared to the results of cvi42 (95% of cvi42 results < Myomics Q Results < 105% of cvi42 results). This deviation is attributed to user contour accuracy and is not considered to affect clinical performance. (SPPT011) |
| Results of Myomics Q are very similar to cvi42 in polar map report in T2 analysis | The results of Myomics Q did not show a difference of more than ±5% compared to the results of cvi42 (95% of cvi42 results < Myomics Q Results < 105% of cvi42 results). This deviation is attributed to user contour accuracy and is not considered to affect clinical performance. (SPPT012) |
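As a generic illustration of this implicit ±5% criterion (the numeric values are hypothetical, not from the filing), a comparison passes when each Myomics Q result lies strictly between 95% and 105% of the corresponding cvi42 result:

```python
def within_tolerance(subject: float, reference: float, tol: float = 0.05) -> bool:
    """True if the subject-device value lies within +/- tol of the (positive)
    reference-device value: (1 - tol) * ref < subject < (1 + tol) * ref."""
    return (1 - tol) * reference < subject < (1 + tol) * reference

# Hypothetical per-segment polar-map T1 values in ms (cvi42 vs. Myomics Q).
cvi42 = [1020.0, 995.5, 1012.3]
myomics_q = [1031.0, 990.1, 1005.8]

assert all(within_tolerance(m, r) for m, r in zip(myomics_q, cvi42))
```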
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The document does not specify the sample size (number of cases or images) used for the performance tests (SPPT010, SPPT011, SPPT012). It only refers to "the results" and "data."
- Data Provenance: The document does not provide information on the country of origin of the data or whether the data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- The document does not mention the use of experts to establish ground truth for the test set. The quantitative performance tests (SPPT010, SPPT011, SPPT012) compare Myomics Q's results against those of the cvi42 software (the reference device/control group), which itself is a cleared product. The discussion indicates that a "user contour accuracy" within the software impacts the results, implying the ground truth for comparison is derived from the cvi42 output based on how contours are drawn.
4. Adjudication Method for the Test Set
- No adjudication method is mentioned or implied, as the comparison is primarily between the subject device (Myomics Q) and a reference software (cvi42). The variability is explicitly linked to "how the contour is drawn," suggesting a single user, or an assumed consistent user, performed the contouring on both systems for comparison.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study was done. The study described is a bench test comparing the subject device's performance to a reference software, not comparing human readers with and without AI assistance.
6. Standalone Performance (Algorithm Only Without Human-in-the-Loop Performance)
- Yes, a standalone performance assessment was conducted for Myomics Q. The "Performance Test" section (7.3) and "Discussion" (7.4) describe "bench tests" where Myomics Q's output is compared directly to cvi42's output. The document explicitly states, for quantitative comparison, that "the quantitative is depending on the user contour accuracy," suggesting the user interaction (drawing contours) is part of the process that leads to the final quantitative results being compared. However, the evaluation is of the software's calculation capabilities post-contouring, rather than a human-in-the-loop scenario. The device itself is a "software application" for analysis.
7. Type of Ground Truth Used
- The "ground truth" for the quantitative performance comparisons (SPPT010, SPPT011, SPPT012) appears to be the results obtained from the cvi42 software (the reference device), which served as the control group in the bench tests.
8. The Sample Size for the Training Set
- The document does not provide any information regarding the sample size used for the training set. This is a premarket notification (510(k)) for a new device, and the data provided focuses on validation rather than training details.
9. How the Ground Truth for the Training Set Was Established
- The document does not provide any information on how the ground truth for the training set was established, as details about the training set itself are absent.