K Number
K232661
Date Cleared
2023-12-07 (98 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

The Myocardial Strain Software Application is intended for qualitative and quantitative evaluation of cardiovascular magnetic resonance (CMR) images. It provides measurements of 2D LV myocardial function (displacement, velocity, strain rate, time to peak, and torsion); these measurements are used by qualified medical professionals, experienced in examining and evaluating CMR images, for the purpose of obtaining diagnostic information for patients with suspected heart disease as part of a comprehensive diagnostic decision-making process.

Device Description

Circle's Myocardial Strain Software Application (Strain Module) is a software device that enables the analysis of CMR images acquired using SSFP cine imaging. It is designed to support physicians in the visualization, evaluation, and analysis of myocardial tissue deformation through CMR feature tracking. The device is intended to be used as an aid to the existing standard of care and does not replace the software applications that physicians already use. The Strain Module can be integrated into image viewing software intended for visualization of cardiac images, such as Circle's FDA-cleared cvi42 software. The Strain Module does not interface directly with any data collection equipment, and its functionality is independent of the vendor and type of acquisition equipment. The analysis results are available on-screen or can be saved for future review.

The Strain Module implements a deformation-modeling algorithm that relies on a two-dimensional (2D) version of the nearly incompressible deformable model. The deformation of the model is assumed to be completely determined by a set of control points placed on the middle curve of the myocardial wall; these points are first defined by the end user in a reference phase and then detected in all other phases based on the feature-tracked boundaries and the incompressibility constraint of the model. Once this feature tracking is complete, the Strain Module computes and reports various global and regional deformation quantities such as strains (including Global Longitudinal Strain (GLS) and Global Circumferential Strain (GCS)), strain rates, displacements, velocities, and torsion. These measurements of myocardial deformation can be made, as appropriate, in the radial, circumferential, or longitudinal directions.
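
The submission does not spell out the strain formulas themselves. As a purely illustrative sketch, the snippet below computes a global circumferential strain curve from mid-wall contours tracked across the cardiac cycle, assuming the common Lagrangian definition of strain as the percent length change relative to the reference phase; the function names, the NumPy-based implementation, and the choice of reference phase are assumptions, not details taken from the submission.

```python
import numpy as np

def contour_length(points: np.ndarray) -> float:
    """Total length of a closed 2D contour given as an (N, 2) array of points."""
    closed = np.vstack([points, points[:1]])      # close the loop
    segments = np.diff(closed, axis=0)            # consecutive edge vectors
    return float(np.sum(np.linalg.norm(segments, axis=1)))

def lagrangian_strain_pct(length_t: float, length_ref: float) -> float:
    """Lagrangian strain (%) of a tracked curve relative to the reference phase."""
    return 100.0 * (length_t - length_ref) / length_ref

def global_circumferential_strain(tracked_contours: list[np.ndarray]) -> np.ndarray:
    """GCS curve over the cardiac cycle from mid-wall contours tracked in every
    phase; tracked_contours[0] is the reference (e.g., end-diastolic) contour.
    The peak (most negative) value of the returned curve would be reported as GCS."""
    length_ref = contour_length(tracked_contours[0])
    return np.array([lagrangian_strain_pct(contour_length(c), length_ref)
                     for c in tracked_contours])
```

Under the same conventions, strain rate would be the time derivative of such a curve and torsion the base-to-apex difference in rotation; the device's exact definitions are not given in the summary.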

AI/ML Overview

The provided text describes the Myocardial Strain Software Application (Strain Module) and its performance data, particularly in the context of its 510(k) submission to the FDA. However, it does not contain a specific table of acceptance criteria or a detailed breakdown of the study results that would allow a direct comparison against each criterion. It mentions that performance testing was conducted and references a Master Software Test Plan, but the actual, quantifiable acceptance criteria and the device's reported performance against them are not explicitly stated in the provided document.

Despite the lack of such a table, the following information about the validation and testing can be extracted and inferred:

1. Table of Acceptance Criteria and Reported Device Performance:

As noted above, a specific table of acceptance criteria with corresponding performance metrics is not provided in the document. The text indicates that "Performance testing was conducted to verify compliance with specified design requirements" and that "The tracking performance and the clinically relevant Global Longitudinal and Global Circumferential strains were validated." It also mentions evaluation using "simple analytical phantoms" and "realistic phantoms with artificially imposed known deformation field and perturbations." This suggests that acceptance criteria would involve accuracy and precision for tracking performance and strain measurements, but the quantitative thresholds are not given.

The document also states that the computation of the deformation metrics from the tracked deformations was "evaluated analytically," implying that for certain computations the device's output was compared against known mathematical solutions and presumably met the associated criteria.
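
The summary names the phantom-based approach but gives no detail on how error against the imposed deformation was quantified. The sketch below is a hypothetical illustration of the general idea under simple assumptions: a circular mid-wall contour serves as the analytical phantom, a uniform radial contraction plays the role of the known deformation field, and a measured strain is checked against the analytically exact value. None of the names, parameters, or tolerances come from the submission.

```python
import numpy as np

def radial_phantom(n_points: int = 64, radius_mm: float = 30.0) -> np.ndarray:
    """Circular mid-wall contour used as a simple analytical phantom."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    return radius_mm * np.column_stack([np.cos(theta), np.sin(theta)])

def apply_contraction(points: np.ndarray, scale: float) -> np.ndarray:
    """Impose a known, uniform radial contraction (scale < 1) on the phantom."""
    return scale * points

def strain_error_pct(measured_strain_pct: float, scale: float) -> float:
    """Absolute error between a measured circumferential strain and the known
    value: a uniform scaling by `scale` gives a true Lagrangian strain of
    100 * (scale - 1) percent."""
    true_strain_pct = 100.0 * (scale - 1.0)
    return abs(measured_strain_pct - true_strain_pct)

# Example: a 20 % uniform contraction should be recovered as roughly -20 % strain.
reference = radial_phantom()
deformed = apply_contraction(reference, scale=0.8)
```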

2. Sample Size Used for the Test Set and Data Provenance:

  • Test Set Sample Size: Not explicitly stated as a number of cases or images. The text mentions "a combination of simple and realistic phantoms, real MRI data, and analytical solutions." It highlights that "the performance of the constrained tissue tracking algorithm was also compared to manual tracking in ES phase by three expert readers." This implies that at least some "real MRI data" was used.
  • Data Provenance: Not specified in terms of country of origin. The document mentions "real MRI data," which could be retrospective or prospective, but this detail is not provided.

3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

  • Number of Experts: "three expert readers."
  • Qualifications: described only as "expert readers"; specific qualifications (e.g., years of experience, subspecialty) are not detailed in the provided text.

4. Adjudication Method for the Test Set:

The document states that the algorithm's performance "was also compared to manual tracking in ES phase by three expert readers." This suggests a comparison against individual expert readings rather than against a consensus ground truth established through an adjudication method such as 2+1 or 3+1. The document does not describe a formal adjudication process for deriving a single ground truth from the three experts; the expert readings instead serve as a comparison point for validation.
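
The summary also does not say how agreement with each reader was quantified. Purely as an illustration of a per-reader comparison (consistent with the absence of a formal consensus step), the sketch below measures how far the algorithm's end-systolic contour points lie from each reader's manually placed points; the distance metric and all names are assumptions rather than details from the submission.

```python
import numpy as np

def mean_nearest_point_distance(auto: np.ndarray, manual: np.ndarray) -> float:
    """Mean distance from each automatically tracked point to the nearest
    manually placed point (a simple surrogate for point-to-curve distance)."""
    pairwise = np.linalg.norm(auto[:, None, :] - manual[None, :, :], axis=2)
    return float(pairwise.min(axis=1).mean())

def per_reader_agreement(auto_es: np.ndarray,
                         readers_es: dict[str, np.ndarray]) -> dict[str, float]:
    """Agreement of the algorithm's end-systolic (ES) tracking with each expert
    reader individually, reported separately rather than against a consensus."""
    return {name: mean_nearest_point_distance(auto_es, points)
            for name, points in readers_es.items()}
```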

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

No MRMC comparative effectiveness study was mentioned. The study described focuses on technical performance validation rather than human-in-the-loop performance improvement. The document explicitly states: "No clinical studies were necessary to support substantial equivalence."

6. Standalone (Algorithm-Only) Performance:

Yes, a standalone performance assessment was conducted. The validation section describes evaluating "tracking performance," deformation fields on phantoms, and comparing the algorithm's output to manual tracking performed by experts. This indicates an assessment of the algorithm's performance independent of real-time human assistance in a diagnostic workflow.

7. Type of Ground Truth Used:

  • For phantoms: "artificially imposed known deformation field" and "simple analytical phantoms generated with variable input parameters," and computations "evaluated analytically." This indicates a synthetic/mathematical ground truth.
  • For real MRI data: "manual tracking in ES phase by three expert readers." This implies an expert-derived reference standard, although, as noted in point 4, the three readings appear to have served as individual comparators rather than being combined into a consensus ground truth.

8. Sample Size for the Training Set:

The document explicitly states that the Strain Module "does not involve any artificial intelligence (AI) or machine learning (ML)." Therefore, there is no training set in the traditional sense of machine learning, as the algorithm relies on a "purely mathematical" model for feature tracking and deformation quantity computation.

9. How Ground Truth for the Training Set Was Established:

As no AI/ML component is described, the concept of a training set and its associated ground-truth establishment is not applicable to this device based on the provided information. The algorithm's basis is described as a "two-dimensional (2D) version of the nearly incompressible deformable model," which indicates a model-based, rather than data-driven (learning-based), approach.
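
The summary does not state the model's constraint mathematically. In standard continuum-mechanics notation (an assumption on our part, not a formulation quoted from the submission), near-incompressibility of a 2D deformation map is typically expressed through its deformation gradient, roughly as follows:

```latex
% Deformation map \phi from the reference phase to phase t, with
% deformation gradient F; near-incompressibility keeps det F close to 1,
% which in the 2D setting amounts to local area preservation.
\mathbf{F}(\mathbf{X},t) = \frac{\partial \boldsymbol{\phi}(\mathbf{X},t)}{\partial \mathbf{X}},
\qquad \det \mathbf{F}(\mathbf{X},t) \approx 1 .
```

A constraint of this kind is what allows the tracked control points on the mid-wall curve to determine the deformation of the whole model, as described above; whether the device enforces it exactly or as a penalty term is not stated in the summary.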

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).