K Number
K233731
Device Name
CardIQ Suite
Date Cleared
2024-08-01

(254 days)

Product Code
Regulation Number
892.1750
Panel
RA
Reference & Predicate Devices
Intended Use

CardIQ Suite is a non-invasive software application designed to provide an optimized application to analyze cardiovascular anatomy and pathology based on 2D or 3D CT cardiac non-contrast and angiography DICOM data from acquisitions of the heart. It provides capabilities for the visualization and measurement of vessels and visualization of chamber mobility. CardIQ Suite also aids in diagnosis and determination of treatment paths for cardiovascular diseases, including coronary artery disease, functional parameters of the heart structures, and follow-up for stent placement, bypasses, and plaque imaging.

CardIQ Suite provides calcium scoring, a non-invasive capability that can be used with non-contrasted cardiac images to evaluate calcified plaques in the coronary arteries, heart valves, and great vessels such as the aorta. The clinician can use the information provided by calcium scoring to monitor the progression of calcium in coronary arteries over time, and this information may aid the clinician in their determination of the prognosis of cardiac disease.

Device Description

CardIQ Suite is a non-invasive software application designed to work with DICOM CT data acquisitions of the heart. It is a collection of tools that provide capabilities for generating measurements both automatically and manually, displaying images and associated measurements in an easy-to-read format, and exporting images and measurements in a variety of formats.

CardIQ Suite provides an integrated workflow to seamlessly review calcium scoring and coronary CT angiography (CCTA) data. Calcium Scoring has the capability to automatically segment and label the calcifications within the coronary arteries, and then automatically compute a total and per-territory calcium score. The calcium segmentation and labeling use a new deep learning algorithm. The calcium scoring is based on the standard Agatston/Janowitz 130 (AJ 130) and Volume scoring methods for the segmented calcific regions. The software also provides a manual calcium scoring capability that allows users to edit (add, delete, or update) auto-scored lesions. It also allows the user to manually score calcific lesions within the coronary arteries, aorta, aortic valve, and mitral valve, as well as other general cardiac structures. Calcium scoring offers quantitative results in the AJ 130, Volume, and Adaptive Volume scoring methods.
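The Agatston (AJ 130) and volume scoring methods referenced above are standard, published algorithms rather than anything specific to this submission. As a minimal sketch (not GE's implementation; the lesion masks, pixel spacing, and function names here are illustrative assumptions): each lesion's calcified area per slice is weighted by its peak attenuation, using the 130 HU threshold that gives AJ 130 its name.

```python
import numpy as np

def agatston_weight(max_hu):
    # Standard Agatston density weighting based on the peak HU within a lesion slice
    if max_hu >= 400: return 4
    if max_hu >= 300: return 3
    if max_hu >= 200: return 2
    if max_hu >= 130: return 1
    return 0

def agatston_and_volume(slices, pixel_area_mm2, slice_thickness_mm, threshold=130):
    """slices: list of 2D HU arrays (one per axial slice) covering a lesion.
    Returns (Agatston score, calcified volume in mm^3)."""
    agatston = 0.0
    n_voxels = 0
    for hu in slices:
        mask = hu >= threshold            # calcified voxels at the AJ 130 threshold
        if not mask.any():
            continue
        area_mm2 = mask.sum() * pixel_area_mm2
        agatston += area_mm2 * agatston_weight(hu[mask].max())
        n_voxels += int(mask.sum())
    volume_mm3 = n_voxels * pixel_area_mm2 * slice_thickness_mm
    return agatston, volume_mm3
```

Note that classic Agatston scoring was defined on 3 mm slices; implementations typically rescale per-slice scores when a different slice thickness is used, a detail this sketch omits.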

Calcium Scoring results can be exported as DICOM SR to assist with integration into structured reporting templates. Images can be saved and exported for sharing with referring physicians, incorporating into reports and archiving as part of the CT examination.

The Multi-Planar Reformat (MPR) Cardiac Review and Coronary Review steps provide an interactive toolset for review of cardiac exams. Coronary CTA datasets can be reviewed utilizing the double oblique angles to visually track the path of the coronary arteries as well as to view the common cardiac chamber orientations. Cine capability for multi-phase data may be useful for visualization of cardiac structures in motion, such as chambers, valves, and arteries. Automatic tracking and labeling allow a comprehensive analysis of the coronaries. Distance measurement and ROI tools are available for quantitative evaluation of the anatomy.

AI/ML Overview

Based on the provided text, here is a description of the acceptance criteria and the study that demonstrates the device meets them:

Device: CardIQ Suite (K233731)
Functionality being assessed: Automated Heart Segmentation, Coronary Tree Segmentation, Coronary Centerline Tracking, and Coronary Artery Labeling (all utilizing new deep learning algorithms).


1. Table of Acceptance Criteria and Reported Device Performance

Automated Outputs Acceptability (Reader Study)
  • Acceptance Criteria: Acceptable by readers for greater than 90% of exams with good image quality (based on Likert Scales and Additional Grading Scales).
  • Reported Device Performance: The automated outputs provided by the Heart Segmentation, Coronary Tree Segmentation, Coronary Centerline Tracking, and Coronary Labeling algorithms incorporated in the subject device CardIQ Suite were scored acceptable by the readers for greater than 90% of the exams with good image quality.

Algorithm Validation (Bench Testing)
  • Acceptance Criteria: Algorithm successfully passes the defined acceptance criteria (specific criteria not detailed in the provided text, but implied for each of the four new deep learning algorithms: heart segmentation, coronary segmentation, coronary centerline tracking, and coronary labeling).
  • Reported Device Performance: The result of the algorithm validation showed that the algorithm successfully passed the defined acceptance criteria.
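The primary reader-study criterion is a simple proportion threshold. A hypothetical sketch of how such a check could be computed from per-exam ratings (the Likert scale, cut-off, and data below are illustrative assumptions, not values from the submission):

```python
def acceptability_rate(ratings, acceptable_min=4):
    """ratings: per-exam Likert scores (e.g., 1-5) for good-image-quality exams.
    An exam counts as acceptable if its score meets the assumed cut-off."""
    accepted = sum(1 for r in ratings if r >= acceptable_min)
    return accepted / len(ratings)

def meets_criterion(ratings, required=0.90, acceptable_min=4):
    # "Acceptable for greater than 90% of exams with good image quality"
    return acceptability_rate(ratings, acceptable_min) > required
```

The strict inequality matters: an acceptability rate of exactly 90% would not satisfy a "greater than 90%" criterion.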

2. Sample size used for the test set and the data provenance

  • Test Set (Reader Study): A "sample of clinical CT images" was used. The exact number of cases is not specified.
  • Test Set (Bench Testing): A "database of retrospective CT exams" was used. The exact number of cases is not specified.
  • Data Provenance: The text does not explicitly state the country of origin. The bench testing data is described as "representative of the clinical scenarios where CardIQ Suite is intended to be used," suggesting it covers relevant acquisition protocols and clinical indicators. Both studies are retrospective ("retrospective CT exams" for bench testing and "sample of clinical CT images" for the reader study implying pre-existing data).

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

  • Number of Experts: A "reader study evaluation was performed," indicating multiple readers. The exact number is not explicitly stated.
  • Qualifications of Experts: The text refers to them as "readers." Their specific qualifications (e.g., "radiologist with 10 years of experience") are not detailed.

4. Adjudication method for the test set

The reader study used "Likert Scales and Additional Grading Scales" for evaluation. The specific adjudication method (e.g., 2+1, 3+1 consensus) for establishing a definitive ground truth or resolving discrepancies among readers is not detailed in the provided text. Outputs were "scored to be acceptable by the readers," implying individual reader ratings rather than a formal consensus process, or perhaps a simple majority or threshold.


5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

Yes, a reader study was performed, which is a form of multi-reader, multi-case evaluation.
The study did not directly measure human reader improvement with AI vs. without AI assistance quantitatively (e.g., an AUC increase). Instead, it focused on the acceptability of the AI-generated outputs.
However, the conclusion states a perceived improvement in workflow efficiency: "Based on the reader study evaluation, we conclude that the automation of Heart Segmentation, Coronary Tree Segmentation, Coronary Centerline Tracking and Coronary Artery Labeling provides an improvement in workflow efficiency when compared to the predicate and reference devices wherein these functionalities were performed manually by the user or using traditional algorithms."

The effect size (quantification of improvement) in terms of reader diagnostic performance is not provided, only the qualitative statement about workflow efficiency and the acceptability of the AI's outputs.


6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done

Yes, a standalone evaluation of the algorithms was performed: "Engineering has performed bench testing for the four newly introduced deep learning algorithms... The result of the algorithm validation showed that the algorithm successfully passed the defined acceptance criteria." This bench testing implies an assessment of each algorithm's performance, independent of human interaction, against predefined criteria.


7. The type of ground truth used

  • For the Reader Study: The ground truth for evaluating the acceptability of the automated outputs was based on the "scores" given by human "readers" using Likert Scales and Additional Grading Scales. This is a form of expert consensus/reader assessment of the AI's output quality.
  • For the Bench Testing for Algorithm Validation: The text states "the algorithm successfully passed the defined acceptance criteria". While the exact nature of this "defined acceptance criteria" is not specified, it would typically involve comparing algorithm output to a reference standard, which could be expert-annotated ground truth, a pre-established gold standard, or other quantitative metrics. The document does not specify if it was pathology, outcomes data, or expert consensus. It likely involved expert-derived annotations or quantitative metrics on the retrospective CT exams.

8. The sample size for the training set

The sample size for the training set is not provided in the given text.


9. How the ground truth for the training set was established

The method for establishing ground truth for the training set is not provided in the given text.

§ 892.1750 Computed tomography x-ray system.

(a) Identification. A computed tomography x-ray system is a diagnostic x-ray system intended to produce cross-sectional images of the body by computer reconstruction of x-ray transmission data from the same axial plane taken at different angles. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.

(b) Classification. Class II.