CardIQ Suite is a non-invasive software application designed to analyze cardiovascular anatomy and pathology from 2D or 3D CT cardiac non-contrast and angiography DICOM data acquired from the heart. It provides capabilities for the visualization and measurement of vessels and for the visualization of chamber mobility. CardIQ Suite also aids in the diagnosis and determination of treatment paths for cardiovascular diseases, including coronary artery disease, functional parameters of the heart, heart structures, and follow-up for stent placement, bypasses, and plaque imaging. CardIQ Suite provides calcium scoring, which can be used with non-contrast cardiac images to evaluate calcified plaques in the coronary arteries, heart valves, and great vessels such as the aorta. Calcium scoring may be used to monitor the progression of calcium in the coronary arteries over time, which may aid in the prognosis of cardiac disease.
CardIQ Suite is a non-invasive software application designed to work with DICOM CT data acquisitions of the heart. It is a collection of tools that provide capabilities for generating measurements both automatically and manually, displaying images and associated measurements in an easy-to-read format, and exporting images and measurements in a variety of formats.
CardIQ Suite provides an integrated workflow to seamlessly review calcium scoring and coronary CT angiography (CCTA) data. Calcium Scoring has the capability to automatically segment and label the calcifications within the coronary arteries, and then automatically compute a total and per-territory calcium score. Calcium segmentation and labeling are performed automatically by the software's algorithm. The calcium scoring is based on the standard Agatston/Janowitz 130 (AJ 130) and Volume scoring methods for the segmented calcific regions. The software also provides users a manual calcium scoring capability that allows them to edit (add, delete, or update) auto-scored lesions. It also allows the user to manually score calcific lesions within the coronary arteries, aorta, and aortic valve, as well as other general cardiac structures. Calcium scoring offers quantitative results using the AJ 130, Volume, and Adaptive Volume scoring methods.
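For orientation, the standard Agatston/Janowitz 130 score weights each lesion's in-plane area by its peak attenuation (factors 1 to 4 for peaks of 130-199, 200-299, 300-399, and ≥400 HU), and the volume score is simply the segmented lesion volume. The sketch below is a minimal illustration of those two definitions applied to a pre-segmented lesion; the data layout and function names are assumptions for this sketch and do not reflect CardIQ Suite's implementation (the Adaptive Volume method is not shown).

```python
import numpy as np

def agatston_weight(peak_hu: float) -> int:
    """HU weighting factor from the standard Agatston/Janowitz (130 HU) method."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    if peak_hu >= 130:
        return 1
    return 0

def score_lesion(hu_slices, masks, pixel_area_mm2, slice_thickness_mm):
    """Agatston and volume scores for one pre-segmented lesion.

    hu_slices : list of 2D numpy arrays of Hounsfield units, one per axial slice
    masks     : matching list of boolean arrays marking the lesion's voxels
    (This data layout is an assumption made for the sketch.)
    """
    agatston = 0.0
    voxel_count = 0
    for hu, mask in zip(hu_slices, masks):
        if not mask.any():
            continue
        area_mm2 = mask.sum() * pixel_area_mm2          # lesion area on this slice
        agatston += area_mm2 * agatston_weight(hu[mask].max())
        voxel_count += int(mask.sum())
    volume_mm3 = voxel_count * pixel_area_mm2 * slice_thickness_mm
    return agatston, volume_mm3

# Tiny synthetic example: a 2-pixel lesion peaking at 310 HU on a single slice.
hu = np.full((4, 4), 40.0)
hu[1, 1], hu[1, 2] = 310.0, 150.0
mask = np.zeros_like(hu, dtype=bool)
mask[1, 1] = mask[1, 2] = True
print(score_lesion([hu], [mask], pixel_area_mm2=0.25, slice_thickness_mm=2.5))
```

A total score would then be the sum of the per-lesion Agatston contributions, and per-territory scores the sums over lesions assigned to each coronary artery.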
Calcium Scoring results can be exported as DICOM SR to assist with integration into structured reporting templates. Images can be saved and exported for sharing with referring physicians, incorporating into reports and archiving as part of the CT examination.
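The summary states only that results can be exported as DICOM SR; the exact SR template and fields are not described. As a loose illustration of what a minimal, non-template-conformant SR object might look like when built with the open-source pydicom library, the sketch below wraps a single text finding in a Basic Text SR dataset. All identifiers, codes, and values are placeholders, and this is not how CardIQ Suite performs its export.

```python
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

# File meta for a Basic Text SR object (SOP Class 1.2.840.10008.5.1.4.1.1.88.11).
file_meta = FileMetaDataset()
file_meta.MediaStorageSOPClassUID = "1.2.840.10008.5.1.4.1.1.88.11"
file_meta.MediaStorageSOPInstanceUID = generate_uid()
file_meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = Dataset()
ds.file_meta = file_meta
ds.SOPClassUID = file_meta.MediaStorageSOPClassUID
ds.SOPInstanceUID = file_meta.MediaStorageSOPInstanceUID
ds.Modality = "SR"
ds.StudyInstanceUID = generate_uid()
ds.SeriesInstanceUID = generate_uid()
ds.PatientName = "Anonymous^Patient"   # placeholder demographics
ds.PatientID = "ANON0001"

# A single TEXT content item carrying the total score (placeholder concept code;
# a conformant export would follow the appropriate DICOM SR template).
finding = Dataset()
finding.ValueType = "TEXT"
concept = Dataset()
concept.CodeValue = "121071"
concept.CodingSchemeDesignator = "DCM"
concept.CodeMeaning = "Finding"
finding.ConceptNameCodeSequence = [concept]
finding.TextValue = "Total Agatston score: 236 (hypothetical value)"
ds.ContentSequence = [finding]

ds.save_as("calcium_score_sr.dcm")
```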
CardIQ Suite provides the Coronary 2D Review toolset, which allows interactive review of cardiac exams. Coronary CTA datasets can be reviewed using double oblique angles to visually track the path of the coronary arteries as well as to view the common cardiac chamber orientations. Cine capability for multi-phase data may be useful for visualization of cardiac structures such as chambers, valves, and arteries in motion. Distance measurement and ROI tools are available for quantitative evaluation of the anatomy.
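The distance and ROI tools mentioned above are conceptually simple image measurements. As a rough sketch of the underlying arithmetic (not CardIQ Suite's code; the array layout and names are assumed), an in-plane ROI statistic and a distance measurement on an axial CT slice can be computed as follows:

```python
import numpy as np

def circular_roi_stats(hu_slice, center_rc, radius_mm, pixel_spacing_mm):
    """Mean and standard deviation of HU inside a circular ROI on one axial slice.

    center_rc is a (row, col) pixel index; pixel_spacing_mm is (row, col) spacing.
    """
    rows, cols = np.indices(hu_slice.shape)
    dr = (rows - center_rc[0]) * pixel_spacing_mm[0]
    dc = (cols - center_rc[1]) * pixel_spacing_mm[1]
    inside = dr ** 2 + dc ** 2 <= radius_mm ** 2
    values = hu_slice[inside]
    return float(values.mean()), float(values.std())

def distance_mm(p0_rc, p1_rc, pixel_spacing_mm):
    """In-plane distance between two (row, col) points, in millimeters."""
    dr = (p1_rc[0] - p0_rc[0]) * pixel_spacing_mm[0]
    dc = (p1_rc[1] - p0_rc[1]) * pixel_spacing_mm[1]
    return float(np.hypot(dr, dc))

# Example on a synthetic slice with 0.5 mm isotropic in-plane spacing.
slice_hu = np.random.default_rng(0).normal(50.0, 10.0, size=(64, 64))
print(circular_roi_stats(slice_hu, center_rc=(32, 32), radius_mm=4.0,
                         pixel_spacing_mm=(0.5, 0.5)))
print(distance_mm((10, 10), (40, 50), pixel_spacing_mm=(0.5, 0.5)))
```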
Here's a breakdown of the acceptance criteria and study details for the CardIQ Suite, based on the provided FDA 510(k) summary:
Acceptance Criteria and Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Successful automatic segmentation, labeling, and scoring of calcific regions in the coronary arteries, per the defined acceptance criteria. | The validation study demonstrated that the algorithm passed the defined acceptance criteria. (Specific quantitative metrics for "successful passing" are not detailed in this summary, but the clinical testing section implies a qualitative assessment of "very high correlations" with manual methods.) |
| Equivalent performance to the predicate device (SmartScore 4.0) for computing the total calcium score. | "Very high correlations were found between manual and automated methods for computing the total calcium score, demonstrating equivalent performance of the CardIQ Suite software to the predicate device SmartScore 4.0." |
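The summary reports only "very high correlations" between the manual and automated total scores, without a numeric coefficient. As a loose illustration of how such paired-score agreement is commonly summarized (Pearson correlation plus a simple mean-difference check), the sketch below uses invented score pairs; none of these numbers come from the actual study.

```python
import numpy as np

# Hypothetical paired total-calcium-score measurements (one pair per exam).
manual_scores = np.array([0.0, 12.4, 87.0, 215.3, 402.8, 1130.5])  # predicate, manual
auto_scores   = np.array([0.0, 11.9, 90.2, 208.7, 415.0, 1102.3])  # automated

# Pearson correlation between the paired totals.
r = np.corrcoef(manual_scores, auto_scores)[0, 1]

# Simple Bland-Altman style summary: mean difference and 95% limits of agreement.
diff = auto_scores - manual_scores
print(f"Pearson r = {r:.3f}")
print(f"mean difference = {diff.mean():.1f}, limits = +/-{1.96 * diff.std(ddof=1):.1f}")
```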
Study Details
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: The summary mentions "a representative set of clinical sample images" for the clinical testing. A specific number is not provided.
- Data Provenance: The study used a "database of retrospective CT exams." The country of origin is not specified. It is likely internal data from GE Medical Systems SCS or collaborating institutions.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Three.
- Qualifications of Experts: "Three board certified radiologists." Specific experience (e.g., years) is not mentioned.
4. Adjudication method for the test set
- The summary states that "Three board certified radiologists manually scored a representative set of clinical sample images using the predicate device." It then compares these manual scores to the automated scores. This implies that the manual scores (generated by the three radiologists using the predicate device) served as the reference standard or ground truth against which the automated algorithm's performance was compared. There is no explicit mention of an adjudication method like 2+1 or 3+1 among the radiologists themselves to reach a single consensus before comparison with the AI.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance vs. without it
- No, a multi-reader multi-case (MRMC) comparative effectiveness study with human readers improving with AI assistance vs. without AI assistance was not explicitly described.
- The clinical study performed was a comparison of the algorithm's standalone performance against manual scoring performed by radiologists using the predicate device. It did not evaluate human reader performance with or without AI assistance.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Yes, a standalone algorithm performance study was done for the Calcium Scoring algorithm. The study compared the automated score output by CardIQ Suite with manual scores provided by radiologists using the predicate device. (A minimal sketch of one way such paired standalone agreement could be summarized appears after this breakdown.)
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- The ground truth used was expert manual scoring (by three board-certified radiologists) using the predicate device (SmartScore 4.0). This serves as a form of expert consensus or reference standard for comparison.
8. The sample size for the training set
- The summary does not provide a specific sample size for the training set. It mentions that the algorithm was validated using "a database of retrospective CT exams" which is "representative of the clinical scenarios" but does not differentiate between training and test sets by size.
9. How the ground truth for the training set was established
- The summary does not explicitly detail how the ground truth for the training set was established. It only describes the validation process (testing set) where radiologists manually scored images. It is generally understood that for deep learning algorithms, training data would also require some form of expert-annotated ground truth, but this document does not provide those specifics for the training phase.
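Item 6 above notes that the clinical evaluation compared standalone automated totals with radiologists' manual totals. Beyond correlation, paired calcium scores are often also compared on categorical risk bins; the sketch below illustrates that idea using commonly cited Agatston cut points and invented data, neither of which is taken from the 510(k) summary.

```python
import numpy as np

# Commonly used CAC reporting categories: 0, 1-10, 11-100, 101-400, >400
# (assumed here for illustration, not taken from the 510(k) summary).
BINS = [0.0, 1.0, 11.0, 101.0, 401.0]

def risk_category(score: float) -> int:
    """Map a total Agatston score to a categorical risk bin (0-4)."""
    return int(np.searchsorted(BINS, score, side="right")) - 1

# Hypothetical paired totals: manual (predicate) vs. automated (standalone algorithm).
manual = [0.0, 4.2, 56.0, 180.3, 712.5]
auto = [0.0, 6.1, 61.7, 168.0, 690.2]

manual_cat = [risk_category(s) for s in manual]
auto_cat = [risk_category(s) for s in auto]

agreement = np.mean([m == a for m, a in zip(manual_cat, auto_cat)])
print(f"exact category agreement: {agreement:.0%}")
```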
§ 892.1750 Computed tomography x-ray system.
(a) Identification. A computed tomography x-ray system is a diagnostic x-ray system intended to produce cross-sectional images of the body by computer reconstruction of x-ray transmission data from the same axial plane taken at different angles. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.

(b) Classification. Class II.