K Number
K221463
Device Name
ElucidVivo A.3
Date Cleared
2022-06-17

(29 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

ElucidVivo is a medical image analysis system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired with contrast from CT imaging devices. ElucidVivo is intended to assist trained physicians in the stratification of patients identified to have atherosclerosis. The software post-processes images obtained using a multidetector CT. The package provides tools for the measurement and visualization (color-coded maps) of arterial vessels.

Clinicians can select any artery to view the following anatomical references: the highlighted vessel in 3D, two rotatable curved MPR vessel views displayed at angles orthogonal to each other, and cross sections of the vessel. Cross-sectional measurements can be obtained using standard ElucidVivo software measuring tools. Clinicians can semi-automatically determine contrasted lumen boundaries, stenosis measurements, and maximum lumen diameters. In addition, clinicians can edit lumen boundaries and examine Hounsfield unit or signal intensity statistics. Clinicians can also manually measure vessel length along the centerline in standard curved MPR views.

The measurements provided by ElucidVivo are not intended to provide a diagnosis or clinical recommendations. ElucidVivo is intended as a tool to complement standard of care.
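The stenosis measurements described above are conventionally reported as percent diameter stenosis. A minimal sketch, assuming a NASCET-style convention (the summary does not state which reference-diameter convention ElucidVivo uses, and `percent_diameter_stenosis` is a hypothetical name):

```python
def percent_diameter_stenosis(min_lumen_diam_mm: float,
                              reference_diam_mm: float) -> float:
    """Percent diameter stenosis under a NASCET-style convention.

    Illustrative assumption only: the 510(k) summary does not specify
    which reference-diameter convention the device applies.
    """
    if reference_diam_mm <= 0:
        raise ValueError("reference diameter must be positive")
    return 100.0 * (1.0 - min_lumen_diam_mm / reference_diam_mm)

# A 2.0 mm minimal lumen within a 5.0 mm reference segment
severity = percent_diameter_stenosis(2.0, 5.0)
```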

Device Description

ElucidVivo is an image analysis software package for evaluating CT images of arterial vessels. It allows the processing, review, analysis, communication, and media interchange of multidimensional digital images acquired from CT scanners. ElucidVivo provides multi-dimensional visualization of digital images to aid clinicians in their analysis of anatomy and pathology. The ElucidVivo software application user interface follows typical clinical workflow patterns to process, review, and analyze digital images.

AI/ML Overview

This document describes the acceptance criteria and the study demonstrating that the device, ElucidVivo A.3, meets them.

Acceptance Criteria and Device Performance

The acceptance criteria are implicitly defined by the analytical performance metrics reported for the device. The metrics include Bias, Intercept, Slope, RMSE (root mean square error), and wSD (weighted standard deviation) for structural measurements, and Difference, Slope, Intercept, RMSE, and wSD for compositional measurements. Each entry in the table below is a point estimate followed by its confidence interval in brackets.

| Category | Measurement | Tested Range | Reported Device Performance |
| --- | --- | --- | --- |
| Structure | Lumen Area (mm²) | 0.3 - 290.1 | Bias: 0.81 mm² [0.3, 1.9], Intercept: 0.65 mm² [-0.6, 0.9], Slope: 1.01 [0.9, 1.0], RMSE: 2.50 [1.30, 2.80], wSD: 2.30 [1.20, 2.60] |
| Structure | Wall Area (mm²) | 9.4 - 448.6 | Bias: 0.50 mm² [-1.08, 1.29], Intercept: -0.59 mm² [-4.1, 2.8], Slope: 1.0 [0.99, 1.04], RMSE: 4.10 [2.60, 6.60], wSD: 3.90 [2.40, 6.30] |
| Structure | Stenosis (%) | 33 - 69 | Bias: 6.10 [3.10, 8.90], Slope: 0.97 [0.83, 1.20], Intercept: 7.90 [-4.60, 15.00], RMSE: 7.00 [3.60, 11.00], wSD: 6.20 [3.20, 9.60] |
| Structure | Wall Thickness (mm) | 1.0 - 9.0 | Bias: 0.5 mm [0.3, 0.6], Intercept: 0.27 mm [-0.1, 0.5], Slope: 1.05 [1.01, 1.1], RMSE: 0.24 [0.17, 0.31], wSD: 0.21 [0.15, 0.28] |
| Structure | Plaque Burden (ratio) | 0.4 - 1.0 | Bias: -0.01 [-0.01, 0.004], Intercept: 0.01 [-0.1, 0.04], Slope: 0.99 [0.9, 1.1], RMSE: 0.018 [0.012, 0.038], wSD: 0.017 [0.012, 0.036] |
| Composition | Calcified Area (mm²) | 0 - 14 | Difference: -0.06 [-0.09, -0.03], Slope: 0.99 [0.91, 1.07], Intercept: -0.04 [-0.08, 0.01], RMSE: 1.46 [1.32, 1.59], wSD: 1.43 [1.31, 1.56] |
| Composition | LRNC Area (mm²) | 0 - 10 | Difference: 0.15 [0.10, 0.20], Slope: 0.92 [0.87, 0.96], Intercept: 0.34 [0.27, 0.42], RMSE: 2.79 [2.70, 2.89], wSD: 2.76 [2.67, 2.85] |
| Composition | Matrix Area (mm²) | 4 - 52 | Difference: 0.02 [-0.22, 0.26], Slope: 0.91 [0.88, 0.94], Intercept: 1.34 [1.00, 1.67], RMSE: 3.77 [3.64, 3.89], wSD: 3.58 [3.46, 3.69] |

Study Details

  1. Sample size used for the test set and the data provenance:
    The document does not explicitly state the test-set sample size (number of patients or cases). It states that "independent test data that had been blinded during the prior A.1.2 and preserved this release, has been used to avoid inadvertent bias associated with reuse of test data." The data includes "phantom and clinical images" as well as "ex vivo tissue specimens with paired CTA"; the country of origin is not specified. The clinical images appear to have been retrospectively collected, given that they were "preserved" from a prior release and used as "independent", "blinded" test data.

  2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • For tissue composition (Calcified Area, LRNC Area, Matrix Area), the histopathology ground truth was established by three independent pathologists. Their specific qualifications (e.g., years of experience) are not detailed, but the summary notes that their "agreement" was considered to account for "acknowledged discordance in histopathology interpretation".
    • For the positioning of annotated sections into 3D radiology volume, two independent radiologist users were involved. Their specific qualifications are not detailed, but their "differences in judgment on where the annotated section data applies within the in vivo volume" were accounted for.
  3. Adjudication method for the test set:

    • For histopathology ground truth, "three independent annotations were used for these results to account for acknowledged discordance in histopathology interpretation." This suggests a consensus or adjudicated approach, though the specific rule (e.g., majority vote, expert panel discussion) is not explicitly stated.
    • For positioning annotated sections, "four combinations resulting from two unique positioners crossed with two independent radiologist users were used for these results to account for differences in judgment." This implies a systematic evaluation of variability rather than a strict adjudication to a single ground truth.
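One way to picture combining multiple independent annotations is sketched below. The pooling rule (per-section mean as consensus, sample SD as inter-reader spread) and all names and values are assumptions for illustration; the summary does not disclose the actual rule:

```python
import statistics

# Hypothetical per-section LRNC areas (mm²) from three independent
# pathologists; names and values are illustrative only.
annotations = {
    "pathologist_1": [2.1, 0.0, 5.4],
    "pathologist_2": [2.4, 0.3, 5.0],
    "pathologist_3": [1.8, 0.0, 5.8],
}

# Per-section consensus (mean) and inter-reader spread (sample SD)
sections = list(zip(*annotations.values()))
consensus = [statistics.mean(s) for s in sections]
spread = [statistics.stdev(s) for s in sections]
```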
  4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
    No MRMC comparative effectiveness study involving human readers with and without AI assistance is described in this document. The study focuses on analytic performance of the device itself (algorithm only) and variability related to human measurement using the device.

  5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
    Yes. The reported metrics (Bias, Slope, Intercept, RMSE, wSD) represent the standalone performance of the ElucidVivo A.3 algorithm in measuring anatomical structure and tissue composition against ground truths derived from histopathology and expert annotation. The phrasing "Objectives evaluated included calculations of anatomic structure (interchangeability with manual measurements as well as inter- and intra-reader variability) and calculations of tissue characteristics (compared to histopathologic specimens representing ground truth as inter- and intra-reader variability)" indicates both standalone performance and performance involving human interaction (though not a direct human-vs.-AI comparison as in an MRMC study).

  6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • For tissue composition measurements (Calcified Area, LRNC Area, Matrix Area), the ground truth was based on histopathology specimens ("histopathologic specimens representing ground truth... pathologist annotation of ex vivo tissue specimens") with expert interpretation.
    • For anatomic structure measurements, the ground truth was established through interchangeability with manual measurements and comparison to expert interpretations (radiologist users for 3D positioning).
    • Overall, the ground truth is described as "expert interpretation that the relevant scientific and clinical community relies upon for diagnosis or other specific categorization of the studied tissue."
  7. The sample size for the training set:
    The document does not provide the sample size for the training set.

  8. How the ground truth for the training set was established:
    The document does not detail how the ground truth for the training set was established, as it primarily focuses on the validation study using a separate, blinded test set.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).