Salix Coronary Plaque (V1.0.0) is a web-based, non-invasive software application that is intended to be used for viewing, post-processing, and analyzing cardiac computed tomography (CT) images acquired from a CT scanner in a Digital Imaging and Communications in Medicine (DICOM) Standard format.
This software provides cardiologists and radiologists with interactive tools for viewing and analyzing cardiac CT data, for quantification and characterization of coronary plaques (i.e., atherosclerosis) and stenosis, and for calcium scoring in non-contrast cardiac CT.
Salix Coronary Plaque (V1.0.0) is intended to complement standard care as an adjunctive tool and is not intended as a replacement to a medical professional's comprehensive diagnostic decision-making process. The software's semi-automated features are intended for an adult population and should only be used by qualified medical professionals experienced in examining and evaluating cardiac CT images.
Users should be aware that certain views make use of interpolated data. These data are created by the software based on the original data set. Interpolated data may give the appearance of healthy tissue in situations where pathology that is near or smaller than the scanning resolution may be present.
Salix Coronary Plaque (K251837) is a web-based software application, hosted on Amazon Web Services cloud computing services and delivered using a software-as-a-service (SaaS) model. The software provides interactive, post-processing tools for trained radiologists or cardiologists for viewing, analyzing, and characterizing cardiac computed tomography (CT) image data obtained from a CT scanner. The physician-driven coronary analysis is used to review CT image data to prepare a standard coronary report that may include the presence and extent of physician-identified coronary plaques (i.e., atherosclerosis) and stenosis, and an assessment of calcium score performed on a non-contrast cardiac CT scan. The cardiac CT image data are physician-ordered and typically obtained from patients who underwent coronary CT angiography (CCTA) or coronary artery calcium (CAC) CT for evaluation of coronary artery disease (CAD) or suspected CAD.
Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) Clearance Letter for Salix Coronary Plaque (V1.0.0):
Acceptance Criteria and Device Performance
| Salix Coronary Plaque Output | Statistic | Reported Device Performance (Estimate [95% CI]) | Acceptance Criteria | Result |
|---|---|---|---|---|
| Vessel Level Stenosis | Percentage within one CAD-RADS category | 95.8% [94.1%, 97.3%] | 90% | Pass |
| Total plaque | ICC3¹ | 0.96 [0.94, 0.98] | 0.70 | Pass |
| Calcified plaque | ICC3¹ | 0.96 [0.90, 0.99] | 0.80 | Pass |
| Noncalcified plaque | ICC3¹ | 0.91 [0.84, 0.95] | 0.55 | Pass |
| Low attenuating plaque | ICC3¹ | 0.61 [0.41, 0.93] | 0.30 | Pass |
| Calcium Scoring | Pearson Correlation | 0.958 [0.947, 0.966] | 0.90 | Pass |
| Centerline Extraction | Overlap score | 0.8604 [0.8445, 0.8750] | 0.80 | Pass |
| Vessel Labelling | F1 Score | 0.8264 [0.8047, 0.8479] | 0.70 | Pass |
| Lumen Wall Segmentation | Dice Score | 0.8996 [0.8938, 0.9055] | 0.80 | Pass |
| Vessel Wall Segmentation | Dice Score | 0.9016 [0.8962, 0.9070] | 0.80 | Pass |
¹ A two-way mixed-effects intraclass correlation coefficient, ICC(3,1), was used.
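The plaque-volume agreement statistic, ICC(3,1), can be computed directly from a subjects × raters score matrix via the standard two-way ANOVA decomposition. A minimal NumPy sketch (the function name and toy data below are illustrative, not from the submission):

```python
import numpy as np

def icc3_1(ratings: np.ndarray) -> float:
    """ICC(3,1): two-way mixed-effects model, single rater, consistency.

    ratings: (n_subjects, k_raters) matrix of scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()

    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()  # between-subject
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()  # between-rater
    ss_err = ss_total - ss_rows - ss_cols                      # residual

    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Two readers in perfect agreement on three subjects:
print(icc3_1(np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])))  # → 1.0
```

Because ICC(3,1) is a consistency measure, a constant offset between raters does not lower the score; only disagreement in the relative ordering of subjects does.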
Study Details
1. A table of acceptance criteria and the reported device performance:
See table above.
2. Sample size used for the test set and the data provenance:
- Multi-reader Multi-case (MRMC) study for Plaque Volumes and CAD-RADS:
- Sample Size: 103 adult patients (58 women, 45 men; mean 61 ± 12 years, range 23–84).
- Data Provenance: Retrospective data from seven geographically diverse U.S. centers (Wisconsin, New York, Arizona, and Alabama). Self-reported race was 57% White, 22% Black or African American, 12% Asian, 2% American Indian/Alaska Native; 7% declined/unknown. 13% identified as Hispanic or Latino. Scans were acquired on contemporary 64-detector row or newer systems from Canon, GE, Philips, and Siemens, ensuring vendor diversity.
- Standalone Performance Validation for ML-enabled Outputs (Calcium Scoring, Centerline Extraction, Vessel Labelling, Lumen and Vessel Wall Segmentation):
- Sample Size:
- 302 non-contrast series for calcium scoring.
- 107 contrast-enhanced series for centerline extraction, vessel labeling, and wall segmentation.
- Data Provenance: Sourced from multiple unique centers in the USA that did not contribute any data to the training datasets for any Salix Central algorithm. The validation dataset consisted of de-identified cardiac CT studies from seven (7) centers across four (4) US states. Included representation of multiple scanner manufacturers (Canon, GE, Philips, and Siemens) and disease severity based on calcium score and maximum stenosis (CAD-RADS classification) based on source clinical radiology reports.
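The standalone validation metrics in the table above (Dice score, F1 score, Pearson correlation) have standard definitions. A minimal sketch for the binary case, using hypothetical arrays (the actual vessel-labelling F1 is likely computed over multiple vessel labels, which this simplifies):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def f1_binary(pred: np.ndarray, truth: np.ndarray) -> float:
    """F1 = 2TP / (2TP + FP + FN); for binary labels this equals Dice."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = (pred & truth).sum()
    fp = (pred & ~truth).sum()
    fn = (~pred & truth).sum()
    return 2.0 * tp / (2 * tp + fp + fn)

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation, as reported for calcium-score agreement."""
    return float(np.corrcoef(x, y)[0, 1])

pred = np.array([1, 1, 0, 0, 1])
truth = np.array([1, 0, 0, 1, 1])
print(dice_score(pred, truth))  # → 0.666...
print(f1_binary(pred, truth))   # same value for binary labels
```

For binary masks the two overlap metrics coincide algebraically, which is why segmentation tasks report Dice while the labelling task reports F1.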
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- For Plaque Volumes and CAD-RADS (MRMC study):
- Number of Experts: Multiple (implied, at least two initial experts plus one adjudicator).
- Qualifications: Independent Level III-qualified (or equivalent experience) experts.
- For ML-enabled Outputs (Standalone Performance Validation):
- Number of Experts: Multiple (implied by the plural "board certified cardiologists and radiologists").
- Qualifications: Board certified cardiologists and radiologists with SCCT Level III certification (or equivalent experience).
4. Adjudication method for the test set:
- For Plaque Volumes and CAD-RADS (MRMC study): Discrepancies between the initial expert readers were resolved by a third independent adjudicator with Level III qualifications or equivalent experience. This is a "2+1" adjudication method.
- For ML-enabled Outputs (Standalone Performance Validation): The ground truth was "independently established" by the experts from the source clinical image interpretation. The document does not specify an adjudication method for these specific tasks if there were multiple ground truth annotations.
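The "2+1" adjudication scheme described for the MRMC study can be stated as a simple decision rule (function name and labels are illustrative, not from the submission):

```python
from typing import Callable, TypeVar

T = TypeVar("T")

def two_plus_one(reader_a: T, reader_b: T, adjudicator: Callable[[], T]) -> T:
    """'2+1' adjudication: keep the label when the two initial readers
    agree; otherwise defer to a third, independent adjudicator."""
    if reader_a == reader_b:
        return reader_a
    return adjudicator()

# Hypothetical CAD-RADS reads; the adjudicator is consulted only on disagreement.
print(two_plus_one("CAD-RADS 3", "CAD-RADS 3", lambda: "CAD-RADS 4"))  # → CAD-RADS 3
print(two_plus_one("CAD-RADS 2", "CAD-RADS 3", lambda: "CAD-RADS 3"))  # → CAD-RADS 3
```

Passing the adjudicator as a callable mirrors the study design: the third expert's read is only produced (and only counted) when the first two disagree.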
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- Yes, an MRMC study was done.
- The study was not designed to measure the improvement of human readers with AI vs without AI assistance (i.e., a comparative effectiveness study of reader performance with and without the device).
- Instead, the MRMC study evaluated the performance of human readers using the Salix Coronary Plaque device compared to an expert ground truth. It states, "Eight U.S.-licensed radiologists and cardiologists... acted as SCP readers... They began with the device's standalone automated output and made any refinements they deemed necessary."
- The conclusion states: "This data supports our claim that qualified clinicians with minimal SCP specific training can achieve SCCT expert-level performance with SCP without the support of a core laboratory or specialized technician pre-read." This implies that the device enables a standard clinical user to achieve expert-level performance, but it does not quantify 'effect size' of improvement over their performance without the device.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- Yes, a standalone performance validation was done for "ML-enabled Salix Coronary Plaque outputs for calcium scoring, centerline extraction, vessel labelling, and lumen and vessel wall segmentation against reference ground truth." These results are presented in the "Reported Device Performance" table and were shown to meet or exceed acceptance criteria.
- The MRMC study also started with the "device's standalone automated output," suggesting that the algorithm's initial automated output was part of the workflow, though readers could refine it.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
- Expert consensus/annotation:
- For the MRMC study (plaque volumes and CAD-RADS), ground truth was established by "Independent Level III-qualified (or equivalent experience) experts [who] produced vessel-wall and lumen segmentations and assigned CAD-RADS stenosis categories." Discrepancies were adjudicated by a third expert.
- For the standalone ML-enabled outputs, ground truth was established by "board certified cardiologists and radiologists with SCCT Level III certification (or equivalent experience) using manual annotation and segmentation tools."
8. The sample size for the training set:
- The document states that the validation data was "sourced from multiple unique centers in the USA that did not contribute any data to the training datasets for any Salix Central algorithm."
- However, the actual sample size used for the training set for Salix Coronary Plaque (V1.0.0) is not provided in the given text.
9. How the ground truth for the training set was established:
- This information is not provided in the given text for the Salix Coronary Plaque device. While it mentions how ground truth for the test set was established, it does not detail the process for the training data (nor the training data size).
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).