510(k) Data Aggregation

    K Number
    K242062
    Device Name
    1CMR Pro
    Date Cleared
    2024-11-15

    (123 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    1CMR Pro is software that displays, analyses and transfers DICOM cardiovascular images acquired in Cardiovascular Magnetic Resonance (CMR) scanners, specifically structure, function and flow in the heart and major vessels using multi-slice, multi-parametric and velocity encoded CMR images. It is compatible with 1.5T and 3T CMR acquisitions.

    The intended patient population is both known healthy patients and patients in whom an underlying cardiac disease is suspected. The standard viewing tools are indicated for all patients. The AI analysis components are not intended for use in patients with a known congenital cardiac abnormality, children (age < 18), or individuals with pacemakers (even if MRI compatible).

    Device Description

    1CMR Pro is software that displays, analyses and transfers DICOM cardiovascular images acquired in Cardiovascular Magnetic Resonance (CMR) scanners, specifically structure, function and flow in the heart and major vessels using multi-slice, multi-phase, multi-parametric and velocity encoded CMR images. It is compatible with 1.5T and 3T CMR acquisitions.
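
    The description above centers on organizing and analyzing multi-slice, multi-phase DICOM series. Purely as an illustration of what such data handling can look like (not the vendor's implementation), the sketch below groups a cine CMR series by slice location and phase using pydicom; the directory name and the use of InstanceNumber as a phase index are assumptions.

```python
# Minimal sketch: group a multi-slice, multi-phase CMR DICOM series into a
# (slice location -> phase -> pixel array) structure. Illustrative only; the
# directory name is hypothetical, and real cine sorting would typically use
# TriggerTime / CardiacNumberOfImages rather than InstanceNumber.
from collections import defaultdict
from pathlib import Path

import pydicom

series_dir = Path("cmr_series")  # hypothetical folder of .dcm files

stack = defaultdict(dict)
for path in sorted(series_dir.glob("*.dcm")):
    ds = pydicom.dcmread(path)
    slice_pos = float(getattr(ds, "SliceLocation", 0.0))
    phase = int(getattr(ds, "InstanceNumber", 0))
    stack[slice_pos][phase] = ds.pixel_array  # needs NumPy to decode pixels

print(f"{sum(len(p) for p in stack.values())} images "
      f"across {len(stack)} slice locations")
```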

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the 1CMR Pro device, based on the provided FDA 510(k) summary:

    Acceptance Criteria and Reported Device Performance

    The document doesn't explicitly state 'acceptance criteria' in a formal table with pass/fail thresholds. Instead, it describes performance benchmarks that the device met or exceeded. 1CMR Pro's performance was assessed against two primary benchmarks: human 'truthers' and other FDA-cleared software.

    Metric / Assessment Type | Acceptance Criteria (Implied) | Reported Device Performance (1CMR Pro)
    DICE scores (LV short-axis contours) | Comparable to or superior to human truthers | Overall DICE scores averaged 0.90, superior to the truther average of 0.89
    Accuracy (14 variables: volumes, function, LV mass) | Pass all assessments; accuracy comparable to or exceeding human truthers | Passed all assessments; accuracy exceeded that of the truthers
    Precision (LV variables: LVEF, LV mass, LVEDV, LVESV) | Superior to human clinicians and prior FDA-cleared software (lower coefficient of variation) | Superior to humans for all measurements
    LVEF coefficient of variation (CoV) | Lower CoV than clinician CoV | 4.3±0.3% vs 7.0±0.6% (clinician), p<0.001
    LV mass CoV | Lower CoV than clinician CoV | 3.8±0.3% vs 4.6±0.3% (clinician), p<0.001
    LVEDV CoV | Lower CoV than clinician CoV | 4.9±0.4% vs 6.2±0.5% (clinician), p<0.001
    LVESV CoV | Lower CoV than clinician CoV | 5.4% (4.3-6.4%) vs 11.4% (6.5-15.6%) (clinician), p=0.008
    Comparison to other cleared software, Circle CVI v5.13 (LVEF, LV mass, LVEDV CoV) | Exceed the performance of the other cleared software | Exceeded performance in every measured variable: LVEF 4.2% (95% CI 3.5-5.0%) vs 10.4% (95% CI 6.8-14.0%); LV mass 4.2% (95% CI 3.5-5.0%) vs 10.4% (95% CI 6.8-14.0%); LVEDV 5.4% (95% CI 4.3-6.4%) vs 11.4% (95% CI 6.5-15.6%)
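
    The DICE figures above are a standard overlap metric; the 510(k) summary does not publish the sponsor's implementation. As a point of reference only, a minimal sketch of the DICE computation on two binary left-ventricular masks (hypothetical NumPy arrays) could look like this:

```python
# Illustrative DICE overlap between an AI contour and a truther contour,
# both rasterized to binary LV masks. Toy data; not the sponsor's code.
import numpy as np

def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DICE = 2*|A & B| / (|A| + |B|) for two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / denom)

# Toy example: two slightly offset square "LV" masks on a 64x64 slice
ai_mask = np.zeros((64, 64), dtype=bool)
truther_mask = np.zeros((64, 64), dtype=bool)
ai_mask[20:44, 20:44] = True
truther_mask[22:46, 22:46] = True
print(f"DICE: {dice_score(ai_mask, truther_mask):.2f}")
```

    A DICE of 1.0 is perfect overlap, so the reported average of 0.90 (vs the truther average of 0.89) sits near the top of that scale.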

    Study Details

    1. Sample size used for the test set and the data provenance:

      • Accuracy Test Set: 64 adults. Data provenance is not explicitly stated beyond "images collected on either 1.5T or 3T scanners (range of Siemens, Philips, and GE)"; it is implied to be retrospective, drawn from an independent validation dataset.
      • Precision Test Set: 110 adults, each scanned twice. Data provenance is similar to the accuracy test set and likewise implied to be retrospective. (A sketch of how a test-retest coefficient of variation can be derived from such paired scans appears after this list.)
      • Comparison to other cleared software Test Set: No specific sample size is provided for this comparison, but it likely used the same data as, or a subset of, the precision test set, as the variables measured are similar.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Number of Experts: 3 independent US-based 'truthers'.
      • Qualifications: All had ">5 years experience." (The specific medical specialty, e.g., radiologist, cardiologist, or CMR specialist, is not stated, but is implied to be relevant to cardiac imaging.)
    3. Adjudication method for the test set: Not explicitly stated. The document mentions "3 independent US based truthers", indicating multiple expert readings, but does not specify how discrepancies were resolved or whether a consensus method (e.g., 2+1, 3+1) was used to establish the final ground truth. It appears that each truther's reading contributed to a 'Truther DICE score average' and a 'Clinician coefficient of variation' rather than to a single adjudicated ground truth for direct comparison.

    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done: Yes, a form of MRMC study was implicitly conducted for precision and DICE score comparison.

      • Effect size of how much human readers improve with AI vs without AI assistance: The study assessed the AI's standalone performance against human readers rather than human readers with AI assistance, so no assisted-reading effect size is reported. The results show that 1CMR Pro's performance (DICE scores, accuracy, precision/CoV) was superior to that of the human truthers/clinicians; for example, for LVEF CoV the AI achieved 4.3% while clinicians achieved 7.0%. This suggests that if AI assistance brought readers to the level of 1CMR Pro's standalone performance, it would represent an improvement over unassisted reading.
    5. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done: Yes, the entire validation described for accuracy and precision evaluated the 1CMR Pro algorithm in a standalone capacity ("no human editing" for the comparison with other cleared software). The "AI versus Clinician" comparison likewise reflects standalone AI performance against human performance.

    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): The ground truth was established by "3 independent US based truthers" with >5 years experience. This indicates an expert consensus or expert-generated truth, specifically based on their interpretation and measurements of the CMR images. It is not based on pathology or clinical outcomes data.

    7. The sample size for the training set: Not provided in the document. The document only discusses "independent validation datasets."

    8. How the ground truth for the training set was established: Not provided in the document, as the training set details are omitted.
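
    The precision claims above rest on a test-retest coefficient of variation derived from the 110 adults scanned twice. The summary does not state which CoV estimator was used; the sketch below uses one common within-subject (root-mean-square) formulation on simulated paired LVEF measurements, purely for illustration.

```python
# Assumed test-retest CoV estimator (RMS within-subject CoV) on simulated
# paired measurements; the 510(k) summary does not specify its formula.
import numpy as np

def within_subject_cov(scan1: np.ndarray, scan2: np.ndarray) -> float:
    """RMS within-subject CoV for paired measurements.

    For two values x1, x2 the within-subject SD is |x1 - x2| / sqrt(2);
    dividing by the subject mean and taking the RMS across subjects gives
    a standard test-retest CoV estimate.
    """
    diffs = scan1 - scan2
    means = (scan1 + scan2) / 2.0
    return float(np.sqrt(np.mean((diffs / means) ** 2 / 2.0)))

# Toy data: simulated repeat LVEF (%) for 110 subjects, 2 scans each
rng = np.random.default_rng(0)
truth = rng.normal(60, 8, size=110)
scan1 = truth + rng.normal(0, 2, size=110)
scan2 = truth + rng.normal(0, 2, size=110)
print(f"Test-retest CoV: {100 * within_subject_cov(scan1, scan2):.1f}%")
```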
