510(k) Data Aggregation (210 days)
The iCardio.ai CardioVision™ AI is an automated machine learning–based decision support system, indicated as a diagnostic aid for patients undergoing an echocardiographic exam consisting of a single PLAX view in an outpatient environment, such as a primary care setting.
When utilized by an interpreting clinician, this device provides information that may be useful in detecting moderate or severe aortic stenosis. iCardio.ai CardioVision™ AI is indicated in adult populations over 21 years of age. Patient management decisions should not be made solely on the results of the iCardio.ai CardioVision™ AI analysis. iCardio.ai CardioVision™ AI analyzes a single cine-loop DICOM of the parasternal long axis (PLAX).
The iCardio.ai CardioVision™ AI is a standalone image analysis software developed by iCardio.ai Corporation, designed to assist in the review of echocardiography images. It is intended for adjunctive use with other physical vital sign parameters and patient information, but it is not intended to independently direct therapy. The device facilitates determining whether an echocardiographic exam is consistent with aortic stenosis (AS), by providing classification results that support clinical decision-making.
The iCardio.ai CardioVision™ AI takes as input a DICOM-compliant, partial or full echocardiogram study, which must include at least one parasternal long-axis (PLAX) view of the heart and at least one full cardiac cycle. The device uses a set of convolutional neural networks (CNNs) to analyze the image data and estimate the likelihood of moderate or severe aortic stenosis. The output consists of a binary classification of "none/mild" or "moderate/severe," indicating whether the echocardiogram is consistent with moderate or severe aortic stenosis. In cases where the image quality is insufficient, the device may output an "indeterminate" result.
The CNNs and their thresholds are fixed prior to validation and do not continuously learn during standalone testing. These models are coupled with pre- and post-processing functionalities, allowing the device to integrate seamlessly with pre-existing medical imaging workflows, including PACS, DICOM viewers, and imaging worklists. The iCardio.ai CardioVision™ AI is intended to be used as an aid in diagnosing AS, with the final diagnosis always made by an interpreting clinician, who should consider the patient's presentation, medical history, and additional diagnostic tests.
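The three possible device outputs described above ("none/mild", "moderate/severe", or "indeterminate") can be sketched as a simple decision flow. This is a hypothetical illustration only: the function name, the quality gate, and both threshold values are assumptions for clarity, not details taken from the clearance letter.

```python
def classify_study(quality_score: float, as_probability: float,
                   quality_min: float = 0.5, as_threshold: float = 0.5) -> str:
    """Map model outputs to the device's three possible results.

    quality_score and as_probability stand in for the outputs of the
    pre-processing and CNN stages; the thresholds are illustrative.
    """
    if quality_score < quality_min:
        return "indeterminate"      # image quality insufficient for analysis
    if as_probability >= as_threshold:
        return "moderate/severe"    # consistent with moderate/severe AS
    return "none/mild"
```

The key design point reflected here is that the quality gate runs first, so a low-quality study is rejected rather than classified, matching the "indeterminate" behavior described in the summary.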
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for CardioVision™:
Acceptance Criteria and Reported Device Performance
| Metric | Acceptance Criteria | Reported Device Performance (without indeterminate outputs) | Reported Device Performance (including indeterminate outputs) |
|---|---|---|---|
| AUROC | Exceeds predefined success criteria | 0.945 | Not explicitly stated |
| Sensitivity | Exceeds predefined success criteria and predicate device | 0.896 (95% Wilson score CI: [0.8427, 0.9321]) | 0.876 (95% Wilson score CI: [0.8213, 0.9162]) |
| Specificity | Exceeds predefined success criteria and predicate device | 0.872 (95% Wilson score CI: [0.8384, 0.8995]) | 0.866 (95% Wilson score CI: [0.8324, 0.8943]) |
| PPV | Not explicitly stated as acceptance criteria | 0.734 (95% Wilson score CI: [0.673, 0.787]) | Not explicitly stated |
| NPV | Not explicitly stated as acceptance criteria | 0.955 (95% Wilson score CI: [0.931, 0.971]) | Not explicitly stated |
| Rejection Rate | Not explicitly stated as acceptance criteria | 1.077% (7 out of 650 studies) | 1.077% |
Note: The document explicitly states that the levels of sensitivity and specificity exceed the predefined success criteria and those of the predicate device, supporting the claim of substantial equivalence. While the exact numerical thresholds for the acceptance criteria are not provided, the narrative confirms they were met.
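The figures in the table above can be sanity-checked with a short script. The Wilson interval helper below is a standard textbook implementation (not code from the clearance letter), and the prevalence value used to reproduce the PPV and NPV is back-derived from the reported sensitivity, specificity, and PPV, so it is an inference rather than a figure stated in the document.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

def ppv_npv(sens, spec, prev):
    """Predictive values implied by sensitivity, specificity, and prevalence (Bayes' rule)."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Rejection rate from the table: 7 indeterminate results out of 650 studies.
print(f"rejection rate: {7 / 650:.3%}")  # 1.077%

# A test-set prevalence of roughly 28% (an inferred value, not stated in the
# document) reproduces the reported PPV of 0.734 and NPV of 0.955 from the
# reported sensitivity (0.896) and specificity (0.872).
ppv, npv = ppv_npv(0.896, 0.872, 0.283)
print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}")
```

This consistency between the reported sensitivity/specificity and the reported PPV/NPV is one internal check that the table's figures come from a single test cohort.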
Study Details
| Feature | Description |
|---|---|
| 1. Sample size used for the test set and the data provenance | Sample Size: 650 echocardiography studies from 608 subjects. Data Provenance: Retrospective, multi-center performance study from 12 independent clinical sites across the United States. |
| 2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts | Number of Experts: Not explicitly stated as a specific number. Qualifications: Described in the document as "experienced Level III echocardiographers." |
| 3. Adjudication method for the test set | Method: A "majority vote approach" was used in cases of disagreement among the experts. |
| 4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance | MRMC Study: No, an MRMC comparative effectiveness study is not detailed in this document. The study described is a standalone performance evaluation of the AI. (A "human factors validation study" was conducted to evaluate usability, where participants successfully completed the critical task of results interpretation without errors, but this is not an MRMC study comparing human performance with and without AI assistance on diagnostic accuracy). |
| 5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done | Standalone Performance Study: Yes, the document describes a "standalone study" with the primary objective to "evaluate the software's ability to detect aortic stenosis." The reported performance metrics (AUROC, Sensitivity, Specificity, etc.) are for the algorithm's performance alone. |
| 6. The type of ground truth used | Ground Truth Type: Expert consensus based on "echocardiographic assessments performed by experienced Level III echocardiographers," with a majority vote for disagreements. |
| 7. The sample size for the training set | Training Set Size: Not specified in the provided document. The document states, "No data from these [test set] sites were used in the training or tuning of the algorithm." |
| 8. How the ground truth for the training set was established | Training Set Ground Truth: Not explicitly detailed in the provided document. It can be inferred that similar methods (expert echocardiographic assessments) would have been used for training data, but the specifics are not provided. |