510(k) Data Aggregation (141 days)
AI4CMR software is designed to report cardiac function measurements (ventricle volumes, ejection fraction, indices etc.) from 1.5T and 3T magnetic resonance (MR) scanners. AI4CMR uses artificial intelligence to automatically segment and quantify the different cardiac measurements. Its results are not intended to be used on a stand-alone basis for clinical decision-making.
The user incorporating AI4CMR into their DICOM application of choice is responsible for implementing a user interface.
AI4CMR v1.0 is a cloud-hosted service used with any third-party DICOM viewer application; the DICOM viewer serves as the user interface and as the interface to a PACS or scanner for AI4CMR. AI4CMR is implemented by the user as a plug-in to the DICOM viewer. It automatically processes and analyses cardiac MR images received by the DICOM viewer to quantify relevant cardiac function metrics, and makes the information available to the user at the user's discretion.
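A minimal sketch of what such a plug-in integration could look like, assuming the viewer can hand off a study as DICOM files and that the cloud service exposes an HTTP endpoint; the endpoint URL, payload layout, and function names below are hypothetical illustrations, not the documented AI4CMR interface:

```python
# Hypothetical sketch of a DICOM-viewer plug-in forwarding a CMR study to a
# cloud-hosted analysis service. Endpoint, payload, and field names are
# illustrative only; they are not the documented AI4CMR API.
from pathlib import Path

import pydicom   # read DICOM headers
import requests  # simple HTTP client for the cloud call

ANALYSIS_ENDPOINT = "https://example-cmr-service/api/v1/analyze"  # hypothetical URL


def send_study(study_dir: str, api_key: str) -> dict:
    """Upload all DICOM files of one CMR study and return the quantification JSON."""
    files = []
    for path in sorted(Path(study_dir).glob("*.dcm")):
        ds = pydicom.dcmread(path, stop_before_pixels=True)  # header check only
        if getattr(ds, "Modality", "") != "MR":
            continue  # skip non-MR objects (e.g., secondary captures)
        files.append(("dicom", (path.name, path.read_bytes(), "application/dicom")))

    resp = requests.post(
        ANALYSIS_ENDPOINT,
        headers={"Authorization": f"Bearer {api_key}"},
        files=files,
        timeout=300,
    )
    resp.raise_for_status()
    # Expected (hypothetical) result: per-chamber volumes, EF, mass, and indices.
    return resp.json()
```

The viewer would then render the returned measurements for the clinician to review, consistent with the labeling that results are not intended for stand-alone clinical decision-making.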
The provided text describes the AI4CMR v1.0 device and its performance evaluation for FDA 510(k) clearance. Here's a breakdown of the acceptance criteria and study details:
Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the reported performance metrics, primarily Intraclass Correlation Coefficient (ICC) and bias, tested against a "consensus" ground truth. No explicit numerical acceptance thresholds are stated; adequate agreement is instead demonstrated by the presented results.
| Metric | Acceptance Criteria (Implicit from Study Design) | Reported Device Performance (Bias ± SD) | Reported Device Performance (ICC) |
|---|---|---|---|
| LV end-diastolic volume (EDV) | High agreement with expert consensus | 8.663 ± 17.28 mL | 0.990 |
| LV end-systolic volume (ESV) | High agreement with expert consensus | -2.893 ± 17.52 mL | 0.991 |
| LV ejection fraction (EF) | High agreement with expert consensus | 3.867 ± 5.74 % | 0.956 |
| LV myocardial mass | High agreement with expert consensus | 2.452 ± 18.04 g | 0.955 |
| RV end-diastolic volume (EDV) | High agreement with expert consensus | 8.355 ± 15.59 mL | 0.964 |
| RV end-systolic volume (ESV) | High agreement with expert consensus | -9.083 ± 13.83 mL | 0.953 |
| RV ejection fraction (EF) | High agreement with expert consensus | 7.802 ± 7.32 % | 0.814 |
| Myocardium Segmentation (DSC) | High segmentation overlap | 0.72 per image (average Dice Similarity Coefficient) | N/A |
Note: The Accuracy table on page 5 also lists "LV bias (std): 3.87 (5.74) LV ICC: 0.96" and "RV bias (std): 7.80 (7.32) RV ICC: 0.81" for EF, which align with the reported MRMC study results for EF.
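For reference, the agreement metrics in the table can be computed from paired per-case measurements. The sketch below, which assumes one device value and one expert-consensus value per case, computes the bias ± SD of the paired differences and ICC(2,1) (two-way random effects, absolute agreement, single measurement); the document does not state which ICC formulation was actually used.

```python
import numpy as np


def bias_sd(device: np.ndarray, reference: np.ndarray) -> tuple[float, float]:
    """Mean and SD of paired differences (device minus expert consensus)."""
    diff = np.asarray(device, float) - np.asarray(reference, float)
    return float(diff.mean()), float(diff.std(ddof=1))


def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    `ratings` has shape (n_cases, n_raters), e.g. column 0 = device,
    column 1 = expert consensus.
    """
    y = np.asarray(ratings, float)
    n, k = y.shape
    grand = y.mean()
    row_means = y.mean(axis=1)
    col_means = y.mean(axis=0)

    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((y - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)            # between-case variance
    ms_cols = ss_cols / (k - 1)            # between-rater variance
    ms_err = ss_err / ((n - 1) * (k - 1))  # residual variance

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )


# Toy example with made-up LV EDV values (mL) for a handful of cases:
device = np.array([150.0, 180.0, 120.0, 200.0, 165.0])
consensus = np.array([145.0, 172.0, 115.0, 190.0, 158.0])
print(bias_sd(device, consensus))                     # ≈ (7.0, 2.12): bias ± SD
print(icc_2_1(np.column_stack([device, consensus])))  # close to 1 for high agreement
```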
Study Information
1. Sample sizes used for the test set and the data provenance:
* Clinical Performance Assessment (MRMC Study Test Set): 146 CMR cases.
* Provenance: Retrospective. Patient data was acquired from Siemens, GE, and Philips 1.5T scanners. The text does not explicitly state the country of origin for this specific 146-case dataset, but the training data was from Hospital de Braga, Portugal.
* Bench Testing (Standalone Performance Test Set): 15 CMR cases from the Society for Cardiovascular Magnetic Resonance (SCMR) Consensus Contour Data.
* Provenance: This is a publicly available consensus dataset. The origin of the cases themselves within the SCMR dataset is not specified in the document, but it includes data from various 1.5T and 3T scanners (Siemens, GE, Philips).
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
* Clinical Performance Assessment (MRMC Study): 2 expert readers.
* Qualifications: "2 expert readers who achieved excellent interrater variability (ICC > 0.75)". Specific qualifications (e.g., radiologist, years of experience) are not stated beyond them being "expert readers."
* Bench Testing (SCMR Consensus Data): 7 independent expert readers.
* Qualifications: From "various core laboratories." Specific qualifications are not detailed beyond "expert readers."
3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
* Clinical Performance Assessment (MRMC Study): The ground truth was established by 2 expert readers. They manually segmented the myocardium and determined volumes, LV mass, and Ejection Fraction. The "consensus" against which the device was compared appears to be derived from these two readers, as the text states, "The primary objective was to evaluate agreement between the AI4MED device and 2 expert readers who achieved excellent interrater variability." No explicit 2+1 or 3+1 adjudication process is described; it seems to be a consensus of the two experts.
* Bench Testing (SCMR Consensus Data): The SCMR Consensus Contour Data is inherently a consensus dataset established by 7 independent expert readers. The specific adjudication method (e.g., averaging, voting) used to create this consensus is not detailed here, but it implies a robust consensus approach.
6. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance:
* A Multi-Reader, Multi-Center (MRMC) retrospective study was performed.
* However, this was an agreement study, comparing the AI device's performance to that of human experts. It was not a comparative effectiveness study designed to assess how human readers improve with AI assistance vs. without it (i.e., a human-in-the-loop study). Therefore, no effect size of human reader improvement with AI assistance is provided or applicable from this specific study design.
5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
* Yes, a standalone performance test was done ("Bench Testing"). AI4CMR performed segmentation and quantification (EDV, ESV, LVM, EF) on its own, and its results were compared against the SCMR Consensus Contour Data (a sketch of the overlap metric used for the segmentation comparison follows below).
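The Dice Similarity Coefficient used for the myocardium segmentation comparison measures voxel-wise overlap between two binary masks, 2|A∩B| / (|A| + |B|). A minimal sketch, assuming the device and consensus contours have already been rasterized to same-shaped binary masks:

```python
import numpy as np


def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0  # both empty -> perfect agreement


# Toy example: two overlapping myocardium-like masks on a small grid.
a = np.zeros((8, 8), bool); a[2:6, 2:6] = True   # 16 voxels
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True   # 16 voxels, 9 overlapping
print(dice_coefficient(a, b))  # 2*9 / 32 = 0.5625
```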
6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
* Expert Consensus.
* For the MRMC study: Consensus of 2 expert readers.
* For the Bench Testing: Consensus of 7 expert readers (SCMR Consensus Contour Data).
7. The sample size for the training set:
* 824 anonymized cases were initially collected. This dataset was split:
* Training Set: 577 cases (70% of 824).
* Validation Set: 121 cases (15%).
* Test Set (internal, for model development): 126 cases (15%).
* Note: The clinical validation test set (146 cases) used for the MRMC study was independent from this training/validation/internal test split.
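A minimal sketch of a reproducible case-level split matching the reported counts; the random seed and shuffling procedure are illustrative assumptions, since the document does not describe how the split was actually performed:

```python
import numpy as np

# Reported split of the 824 anonymized development cases: 577 / 121 / 126.
rng = np.random.default_rng(seed=42)  # illustrative seed; actual procedure not described
case_ids = rng.permutation(824)       # shuffle case indices 0..823

train_ids = case_ids[:577]            # ~70% -> model training
val_ids = case_ids[577:577 + 121]     # ~15% -> model selection / tuning
test_ids = case_ids[577 + 121:]       # remaining 126 cases -> internal testing

assert len(test_ids) == 126
```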
8. How the ground truth for the training set was established:
* The document states that the training data was "collected retrospectively from Hospital de Braga, Portugal." It implies that cardiac function measurements (ventricle volumes, ejection fraction, indices etc.) would have been part of the standard clinical reporting for these cases. However, the specific method for establishing the ground truth (e.g., manual segmentation by clinicians, expert review, consensus) for the training data is not explicitly detailed in the provided text. It is assumed to be derived from the clinical records or expert annotations used during the model's development phase.