K Number
K213998
Date Cleared
2022-07-28

(219 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

cvi42 Auto is intended to be used for viewing, post-processing, and qualitative evaluation of cardiovascular magnetic resonance (MR) and computed tomography (CT) images in the Digital Imaging and Communications in Medicine (DICOM) Standard format.

It provides a set of tools to assist physicians in the qualitative assessment of cardiac images and quantitative measurement of the heart and adjacent vessels; to perform calcium scoring; and to confirm the presence of physician-identified lesions in blood vessels.

The target population for cvi42 Auto's manual workflows is not restricted; however, cvi42 Auto's semi-automated machine learning algorithms are intended for an adult population.

cvi42 Auto shall be used only for cardiac images acquired from an MR or CT scanner. It shall be used by qualified medical professionals, experienced in examining and evaluating cardiovascular MR or CT images, for the purpose of obtaining diagnostic information as part of a comprehensive decision-making process.

Device Description

cvi42 Auto is software as a medical device (SaMD) intended for evaluating CT and MR images of the cardiovascular system. Combining digital image processing, visualization, quantification, and reporting tools, cvi42 Auto is designed to support the physician in confirming the presence or absence of physician-identified lesions in blood vessels and in the evaluation, documentation, and follow-up of any such lesions.

cvi42 Auto uses machine learning techniques to aid in semi-automatic contouring of regions of interest of cardiac magnetic resonance (MR) or computed tomography (CT) images as follows:

    1. Cardiac Function: semi-automatic contouring of the four heart chambers (left ventricle, left atrium, right ventricle, right atrium) in MR images.
    2. Calcium Assessment: using a pixel-intensity technique, identification of calcified plaque in the major coronary arteries in non-contrast enhanced CT images (see the sketch after this list).
    3. Coronary Analysis: semi-automatic placement of centerlines in coronary vessels to visualize the coronary arteries and assess stenosis in non-contrast enhanced CT images.
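
The submission does not spell out the pixel-intensity technique used for calcium assessment. As a rough illustration only, a conventional Agatston-style approach thresholds non-contrast CT voxels at 130 Hounsfield units and weights lesion area by peak density; the sketch below assumes slices already calibrated to HU, uses illustrative names, and is not Circle's implementation.

```python
import numpy as np

HU_THRESHOLD = 130  # conventional calcium threshold for non-contrast CT


def density_weight(peak_hu):
    """Agatston density factor derived from the lesion's peak HU."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    return 1  # 130-199 HU


def agatston_score(slices, pixel_area_mm2):
    """Sum area-times-density-weight over all slices of a non-contrast CT.

    `slices` is a list of 2D numpy arrays in Hounsfield units and
    `pixel_area_mm2` is the in-plane pixel area from the DICOM header.
    A real implementation would restrict the mask to the coronary arteries
    and split it into connected components per lesion.
    """
    score = 0.0
    for hu in slices:
        mask = hu >= HU_THRESHOLD
        if not mask.any():
            continue
        area_mm2 = float(mask.sum()) * pixel_area_mm2
        if area_mm2 < 1.0:  # ignore sub-millimetre specks likely to be noise
            continue
        score += area_mm2 * density_weight(float(hu[mask].max()))
    return score
```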

The data used to train these machine learning algorithms were sourced from multiple clinical sites in urban centers and in different countries. When selecting training data, the importance of model generalization was considered, and data were selected so that a good distribution of patient demographics, scanners, and imaging parameters was represented. The separation into training and validation datasets was made at the study level to ensure no overlap between the two sets; as such, different scans from the same study were not split between the training and validation datasets. None of the cases used for model validation were used to train the machine learning models.
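
The study-level separation described above can be made concrete with a grouped split, where the grouping key is the study identifier. The following sketch uses scikit-learn's GroupShuffleSplit with illustrative variable names; the submission does not state what tooling Circle actually used.

```python
from sklearn.model_selection import GroupShuffleSplit


def study_level_split(scans, study_ids, val_fraction=0.2, seed=0):
    """Split scans into training and validation sets with no shared studies.

    `scans` is a sequence of image records and `study_ids[i]` is the study
    that scan i belongs to (names are illustrative, not from the submission).
    """
    splitter = GroupShuffleSplit(n_splits=1, test_size=val_fraction,
                                 random_state=seed)
    train_idx, val_idx = next(splitter.split(scans, groups=study_ids))
    # Sanity check: no study contributes scans to both sets.
    assert not ({study_ids[i] for i in train_idx} &
                {study_ids[i] for i in val_idx})
    return [scans[i] for i in train_idx], [scans[i] for i in val_idx]
```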

cvi42 Auto software has a graphical user interface which allows users to analyze cardiac images qualitatively and quantitatively for volume/mass, function and signal intensity changes including a reporting function.

The device can be integrated into a hospital, private practice, or medical research environment and provides clinical diagnostic decision-support tools for cardiovascular MR and CT techniques.

Additionally, the software is designed to generate a 3D view of the heart from CT images for qualitative assessment of the coronary arteries. No quantitative assessment can be made from the 3D image.
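
The rendering method behind the 3D view is not described in the submission. One common, purely qualitative way to collapse a coronary CT volume for viewing is a maximum-intensity projection (MIP); the sketch below illustrates that general idea and is not the device's renderer.

```python
import numpy as np
import matplotlib.pyplot as plt


def mip(volume, axis=1):
    """Maximum-intensity projection of a 3D CT volume (axes z, y, x)."""
    return volume.max(axis=axis)


# Example with a synthetic volume; the projection is for qualitative viewing
# only, and no measurements should be taken from it.
volume = np.random.normal(loc=-500.0, scale=200.0, size=(64, 256, 256))
plt.imshow(mip(volume), cmap="gray")
plt.axis("off")
plt.show()
```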

The software does not interface directly with any data acquisition equipment; instead, it loads data files previously generated by such equipment. Its functionality is independent of the acquisition equipment vendor. The analysis results are available on-screen and can be saved within the software for future review.
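
Because the software only reads previously exported DICOM files, loading works the same regardless of scanner vendor. A minimal sketch using the pydicom library (the submission does not name the toolkit cvi42 Auto uses) shows what vendor-neutral series loading typically looks like.

```python
from pathlib import Path

import numpy as np
import pydicom


def load_series(series_dir):
    """Load a DICOM series exported by any MR/CT scanner into one volume."""
    datasets = [pydicom.dcmread(path) for path in Path(series_dir).glob("*.dcm")]
    # Sort slices along the patient z-axis so the volume is in spatial order.
    datasets.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    volume = np.stack([ds.pixel_array.astype(np.float32) for ds in datasets])
    # Apply the rescale tags so CT values come out in Hounsfield units.
    slope = float(getattr(datasets[0], "RescaleSlope", 1.0))
    intercept = float(getattr(datasets[0], "RescaleIntercept", 0.0))
    return volume * slope + intercept
```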

AI/ML Overview

The provided text describes the acceptance criteria and the study demonstrating that the cvi42 Auto imaging software application meets those criteria.

Here's an organized breakdown of the requested information:


1. Table of Acceptance Criteria and Reported Device Performance

The acceptance criteria are described as pre-defined performance thresholds for the machine learning models. The reported performance is the achieved accuracy or error rate.

| Feature / Metric | Acceptance Criteria (Pre-defined) | Reported Device Performance |
| --- | --- | --- |
| CMR Function Analysis | | |
| Series Classification Accuracy | Defined by True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN) | 97% - 100% |
| Volumetric Mean Absolute Error (MAE) for SAX | Not explicitly stated but calculated | 7% - 10% |
| Volumetric Mean Absolute Error (MAE) for LAX | Not explicitly stated but calculated | 5% - 9% |
| Calcium Analysis | | |
| Classification Accuracy | Defined by TP, TN, FP, and FN | 86% - 99% |
| Coronary Analysis | | |
| Centerline Quality and Performance | Defined by TP and FN | 82% - 94% |
| Mask Performance | Success rate for relevant masks | 98% - 100% |

Note: The document states that "All performance testing results met Circle's pre-defined acceptance criteria." While specific numerical "acceptance criteria" are not given for all metrics, the reported performance ranges are implicitly within the accepted thresholds.
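
The table's criteria are framed in terms of TP/TN/FP/FN counts and a volumetric mean absolute error. The submission does not give its exact formulas; the sketch below shows the conventional definitions those terms usually denote, with the volumetric MAE read as a percentage of the reference volume, which is one plausible interpretation.

```python
import numpy as np


def classification_accuracy(tp, tn, fp, fn):
    """Fraction of cases classified correctly (series or lesion level)."""
    return (tp + tn) / (tp + tn + fp + fn)


def sensitivity(tp, fn):
    """Detection-style metric defined from TP and FN only, as for centerlines."""
    return tp / (tp + fn)


def volumetric_mae_percent(predicted_ml, reference_ml):
    """Mean absolute volume error as a percentage of the reference volumes."""
    predicted = np.asarray(predicted_ml, dtype=float)
    reference = np.asarray(reference_ml, dtype=float)
    return float(np.mean(np.abs(predicted - reference) / reference) * 100.0)
```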


2. Sample Size Used for the Test Set and Data Provenance

  • Total anonymized patient images for validation: n = 235
  • Breakdown by analysis type (note: total is >235 as some analyses might use overlapping sets or different views from the same patient):
    • Coronary Analysis: 70 samples
    • Calcium Analysis: 102 samples
    • SAX Function Contouring: 63 samples
    • 2-CV LAX Function Contouring: 63 samples
    • 3-CV LAX Function Contouring: 63 samples
    • 4-CV LAX Function Contouring: 63 samples
    • Function Classification: 252 samples
  • Data Provenance: "Across all MR and CT machine manufacturers." "At least 50% of the data came from a U.S. population." The data for validation was explicitly stated to not have been used during the development of the training algorithms, indicating a distinct test set. The document implies a retrospective collection of anonymized patient images for validation.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

The document does not specify the number or qualifications of experts used to establish the ground truth for the test set. It only mentions that the device is "intended to be used by qualified medical professionals, experienced in examining and evaluating cardiovascular MR or CT images, for the purpose of obtaining diagnostic information as part of a comprehensive decision-making process." This likely refers to the users of the device, not necessarily the ground truth adjudicators for the validation study.


4. Adjudication Method for the Test Set

The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1) for establishing the ground truth on the test set. The results are presented as direct performance metrics against an assumed ground truth, but how that ground truth was derived (e.g., single expert, consensus) is not detailed.


5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

The provided text does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done to evaluate how human readers improve with AI vs. without AI assistance. The performance data presented focuses on the algorithm's standalone performance or its semi-automated function.


6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

Yes, a standalone study was done. The performance data provided (e.g., classification accuracies, MAE, centerline performance, mask performance) describes the performance of the machine learning algorithms themselves (the "semi-automated machine learning algorithms"), rather than human-AI team performance. The mention of "semi-automatic contouring" and "semi-automatic placement of centerline" implies that the AI assists, but the reported metrics appear to be related to the accuracy of the algorithm's output.


7. Type of Ground Truth Used

The type of ground truth used is not explicitly stated in detail for the validation set. Given the context of "semi-automatic contouring" and "classification accuracy," it is highly probable that the ground truth for contouring (e.g., for heart chambers) would have been established by expert manual segmentation, and for classifications (e.g., calcium presence), it would be based on expert review or established clinical criteria. However, explicit details like "expert consensus" or "pathology" are not mentioned.


8. The Sample Size for the Training Set

The document states: "The data used to train these machine learning algorithms were sourced from multiple clinical sites from urban centers and from different countries." However, the specific sample size (number of images or patients) used for the training set is not provided in the given text.


9. How the Ground Truth for the Training Set Was Established

The document mentions that training data was "sourced from multiple clinical sites" and that "the importance of model generalization was considered and data was selected such that a good distribution of patient demographics, scanner, and image parameters were represented." It also differentiates between training and validation datasets by ensuring "no overlap between the two sets."
While it broadly states that data was selected considering generalization, it does not explicitly detail how the ground truth for the training set was established (e.g., expert annotation, clinical reports, etc.).

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).