
510(k) Data Aggregation

    K Number: K221463
    Device Name: ElucidVivo A.3
    Date Cleared: 2022-06-17 (29 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Predicate For:
    Intended Use

    ElucidVivo is a medical image analysis system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired with contrast from CT imaging devices. ElucidVivo is intended to assist trained physicians in the stratification of patients identified to have atherosclerosis. The software post processes images obtained using a multidetector CT. The package provides tools for the measurement and visualization (color coded maps) of arterial vessels.

    Clinicians can select any artery to view the following anatomical references: the highlighted vessel in 3D, two rotatable curved MPR vessel views displayed at angles orthogonal to each other, and cross sections of the vessel. Cross-sectional measurements can be obtained using standard ElucidVivo software measuring tools. Clinicians can semi-automatically determine contrasted lumen boundaries, stenosis measurements, and maximum lumen diameters. In addition, clinicians can edit lumen boundaries and examine Hounsfield unit or signal intensity statistics. Clinicians can also manually measure vessel length along the centerline in standard curved MPR views.

    The measurements provided by ElucidVivo are not intended to provide a diagnosis or clinical recommendations. ElucidVivo is intended as a tool to complement standard of care.

    Device Description

    ElucidVivo is an image analysis software package for evaluating CT images of arterial vessels. It allows the processing, review, analysis, communication, and media interchange of multidimensional digital images acquired from CT scanners. ElucidVivo provides multi-dimensional visualization of digital images to aid clinicians in their analysis of anatomy and pathology. The ElucidVivo software application user interface follows typical clinical workflow patterns to process, review, and analyze digital images.

    AI/ML Overview

    This document describes the acceptance criteria and the study offered to demonstrate that the device, ElucidVivo A.3, meets those criteria.

    Acceptance Criteria and Device Performance

    The acceptance criteria are implicitly defined by the analytical performance metrics presented for the device; the reported performance is summarized in the table below. Structural measurements are summarized by Bias, Intercept, Slope, RMSE (root mean square error), and wSD (weighted standard deviation); compositional measurements by Difference, Slope, Intercept, RMSE, and wSD. Each value is reported as a point estimate with a confidence interval in brackets. An illustrative computation of these metrics is sketched after the table.

    Category | Measurement | Tested Range | Reported Device Performance
    Structure | Lumen Area (mm²) | 0.3 - 290.1 | Bias: 0.81mm² [0.3, 1.9], Intercept: 0.65mm² [-0.6, 0.9], Slope: 1.01 [0.9, 1.0], RMSE: 2.50 [1.30, 2.80], wSD: 2.30 [1.20, 2.60]
    Structure | Wall Area (mm²) | 9.4 - 448.6 | Bias: 0.50mm² [-1.08, 1.29], Intercept: -0.59mm² [-4.1, 2.8], Slope: 1.0 [0.99, 1.04], RMSE: 4.10 [2.60, 6.60], wSD: 3.90 [2.40, 6.30]
    Structure | Stenosis (%) | 33 - 69 | Bias: 6.10 [3.10, 8.90], Slope: 0.97 [0.83, 1.20], Intercept: 7.90 [-4.60, 15.00], RMSE: 7.00 [3.60, 11.00], wSD: 6.20 [3.20, 9.60]
    Structure | Wall Thickness (mm) | 1.0 - 9.0 | Bias: 0.5mm [0.3, 0.6], Intercept: 0.27mm [-0.1, 0.5], Slope: 1.05 [1.01, 1.1], RMSE: 0.24 [0.17, 0.31], wSD: 0.21 [0.15, 0.28]
    Structure | Plaque Burden (ratio) | 0.4 - 1.0 | Bias: -0.01 [-0.01, 0.004], Intercept: 0.01 [-0.1, 0.04], Slope: 0.99 [0.9, 1.1], RMSE: 0.018 [0.012, 0.038], wSD: 0.017 [0.012, 0.036]
    Composition | Calcified Area (mm²) | 0 - 14 | Difference: -0.06 [-0.09, -0.03], Slope: 0.99 [0.91, 1.07], Intercept: -0.04 [-0.08, 0.01], RMSE: 1.46 [1.32, 1.59], wSD: 1.43 [1.31, 1.56]
    Composition | LRNC Area (mm²) | 0 - 10 | Difference: 0.15 [0.10, 0.20], Slope: 0.92 [0.87, 0.96], Intercept: 0.34 [0.27, 0.42], RMSE: 2.79 [2.70, 2.89], wSD: 2.76 [2.67, 2.85]
    Composition | Matrix Area (mm²) | 4 - 52 | Difference: 0.02 [-0.22, 0.26], Slope: 0.91 [0.88, 0.94], Intercept: 1.34 [1.00, 1.67], RMSE: 3.77 [3.64, 3.89], wSD: 3.58 [3.46, 3.69]
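
    The submission text quoted here does not state how these summary statistics were computed. As a rough Python sketch only (the estimator choices, the interpretation of wSD, and the synthetic values are assumptions, not details from the document), one common way to obtain Bias, Intercept, Slope, RMSE, and a spread statistic from paired device-versus-reference measurements is:

    # Illustrative sketch only: ordinary-least-squares calibration and simple
    # error summaries for paired device vs. reference measurements. The 510(k)
    # summary does not specify its estimators, its CI method, or how wSD is
    # defined; the definitions and the synthetic data below are assumptions.
    import numpy as np

    def agreement_metrics(device: np.ndarray, reference: np.ndarray) -> dict:
        """Summarize device-vs-reference agreement for one measurand."""
        diff = device - reference
        bias = diff.mean()                                   # mean difference ("Bias")
        slope, intercept = np.polyfit(reference, device, 1)  # linear calibration fit
        rmse = np.sqrt(np.mean(diff ** 2))                   # root mean square error
        wsd = diff.std(ddof=1)                               # assumed: SD of the differences
        return {"bias": bias, "intercept": intercept, "slope": slope,
                "rmse": rmse, "wSD": wsd}

    # Synthetic example over the lumen-area tested range (illustrative, not study data)
    rng = np.random.default_rng(0)
    ref = rng.uniform(0.3, 290.1, size=200)
    dev = 1.01 * ref + 0.65 + rng.normal(0.0, 2.3, size=200)
    print(agreement_metrics(dev, ref))

    Confidence intervals like those shown in brackets would additionally require a resampling or analytic variance method, which the document does not describe.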

    Study Details

    1. Sample size used for the test set and the data provenance:
      The document does not explicitly state the sample size (number of patients or cases) for the test set. It mentions "independent test data that had been blinded during the prior A.1.2 and preserved this release, has been used to avoid inadvertent bias associated with reuse of test data." The type of data includes "phantom and clinical images". The provenance is not specified by country of origin, but the data includes "ex vivo tissue specimens with paired CTA". The clinical images appear to have been collected retrospectively, given that they were "preserved" from a prior release and used as "independent" and "blinded" test data.

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • For tissue composition (Calcified Area, LRNC Area, Matrix Area), the ground truth for histopathology was established by three independent pathologists. Their specific qualifications (e.g., years of experience) are not detailed, but it's stated that their "agreement" was considered, accounting for "acknowledged discordance in histopathology interpretation".
      • For the positioning of annotated sections into 3D radiology volume, two independent radiologist users were involved. Their specific qualifications are not detailed, but their "differences in judgment on where the annotated section data applies within the in vivo volume" were accounted for.
    3. Adjudication method for the test set:

      • For histopathology ground truth, "three independent annotations were used for these results to account for acknowledged discordance in histopathology interpretation." This suggests a consensus or adjudicated approach, though the specific rule (e.g., majority vote, expert panel discussion) is not explicitly stated.
      • For positioning annotated sections, "four combinations resulting from two unique positioners crossed with two independent radiologist users were used for these results to account for differences in judgment." This implies a systematic evaluation of variability rather than a strict adjudication to a single ground truth.
    4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
      No MRMC comparative effectiveness study involving human readers with and without AI assistance is described in this document. The study focuses on analytic performance of the device itself (algorithm only) and variability related to human measurement using the device.

    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
      Yes. The reported performance metrics (Bias, Intercept, Slope, RMSE, wSD) represent the standalone performance of the ElucidVivo A.3 algorithm in measuring anatomical structures and tissue composition against ground truths derived from histopathology and expert annotations. The statement that "Objectives evaluated included calculations of anatomic structure (interchangeability with manual measurements as well as inter- and intra-reader variability) and calculations of tissue characteristics (compared to histopathologic specimens representing ground truth as inter- and intra-reader variability)" indicates both standalone performance and performance associated with human interaction, though not a direct human-versus-AI comparison as in an MRMC study.

    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • For tissue composition measurements (Calcified Area, LRNC Area, Matrix Area), the ground truth was based on histopathology specimens ("histopathologic specimens representing ground truth... pathologist annotation of ex vivo tissue specimens") with expert interpretation.
      • For anatomic structure measurements, the ground truth was established through interchangeability with manual measurements and comparison to expert interpretations (radiologist users for 3D positioning).
      • Overall, the ground truth is described as "expert interpretation that the relevant scientific and clinical community relies upon for diagnosis or other specific categorization of the studied tissue."
    7. The sample size for the training set:
      The document does not provide the sample size for the training set.

    8. How the ground truth for the training set was established:
      The document does not detail how the ground truth for the training set was established, as it primarily focuses on the validation study using a separate, blinded test set.


    K Number: K183012
    Device Name: vascuCAP
    Date Cleared: 2018-12-21 (51 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Predicate For:
    Intended Use

    vascuCAP is a medical image analysis system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired with contrast from CT imaging devices.

    vascuCAP is intended to assist trained physicians in the stratification of patients identified to have atherosclerosis. The software post processes images obtained using a multidetector CT. The package provides tools for the measurement and visualization (color coded maps) of arterial vessels.

    Clinicians can select any artery to view the following anatomical references: the highlighted vessel in 3D, two rotatable curved MPR vessel views displayed at angles orthogonal to each other, and cross sections of the vessel. Cross-sectional measurements can be obtained using standard vascuCAP software measuring tools. Clinicians can semi-automatically determine contrasted lumen boundaries, stenosis measurements, and maximum lumen diameters. In addition, clinicians can edit lumen boundaries and examine Hounsfield unit or signal intensity statistics. Clinicians can also manually measure vessel length along the centerline in standard curved MPR views.

    The measurements provided by vascuCAP are not intended to provide a diagnosis or clinical recommendations. vascuCAP is intended as a tool to complement standard of care.

    Device Description

    vascuCAP is an image analysis software package for evaluating CT images of arterial vessels. It allows the processing, review, analysis, communication, and media interchange of multidimensional digital images acquired from CT scanners. vascuCAP provides multi-dimensional visualization of digital images to aid clinicians in their analysis of anatomy and tissue characteristics. The vascuCAP software application user interface follows typical clinical workflow patterns to process, review, and analyze digital images.

    AI/ML Overview

    The provided text describes the performance data for the vascuCAP A.1.2 device, comparing it to a predicate device. Here's a breakdown of the acceptance criteria and the study details:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state "acceptance criteria" as pass/fail thresholds against which the device was evaluated. Instead, it presents "analytic performance metrics" as established results from the testing. For each measurand, the reported performance includes bias, intercept, slope, quadratic term, and R², along with 95% confidence intervals; the "tested range" indicates the range of true values for the measurand. An illustrative fit of this kind is sketched after the table.

    Table: Reported Device Performance Metrics (Implicit Acceptance Criteria)

    Structure/Composition | Measurand (Tested Range) | Reported Performance (Point Estimate [95% Confidence Interval])
    Structure | Lumen Area (0.3 - 290.1mm²) | Bias: 0.81mm² [0.3, 1.9], Intercept: 0.65mm² [-0.6, 0.9], Slope: 1.01 [0.9, 1.0], Quadratic term: 0.0 [0.0, 0.0], R²: 0.9987
    Structure | Wall Area (9.4 - 448.6mm²) | Bias: 0.50mm² [-1.08, 1.29], Intercept: -0.59mm² [-4.1, 2.8], Slope: 1.0 [0.99, 1.04], Quadratic term: 0.0 [0.0, 0.0], R²: 0.9974
    Structure | Stenosis (33 - 69%) | Vessels ≥5.9mm: Bias: 3.7% [1.29, 4.47], Intercept: 5.99% [-0.81, 9.93], Slope: 0.96 [0.84, 1.1], Quadratic term: -0.01 [-0.02, 0.01], R²: 0.8034; Vessels <5.9mm: Bias: 9.3% [2.14, 12.72], Intercept: 34.0% [-2.3, 38.9], Slope: 0.55 [0.42, 1.21], Quadratic term: 0.001 [-0.02, 0.06], R²: 0.9549
    Structure | Wall Thickness (1.0 - 9.0mm) | Bias: 0.5mm [0.3, 0.6], Intercept: 0.27mm [-0.1, 0.5], Slope: 1.05 [1.01, 1.1], Quadratic term: -0.008 [-0.02, 0.01], R²: 0.9855
    Structure | Plaque Burden (0.4 - 1.0 ratio) | Bias: -0.01 [-0.01, 0.004], Intercept: 0.01 [-0.1, 0.04], Slope: 0.99 [0.9, 1.1], Quadratic term: 0.03 [-0.1, 0.3], R²: 0.9794
    Composition | Calcified Area (0.0 - 51.2mm²) | Difference: 0.15mm² [-0.5, 0.97], Intercept: 0.4mm² [-0.02, 1.6], Slope: 0.9 [0.6, 1.1], Quadratic term: -0.01 [-0.1, 0.04], R²: 0.875
    Composition | LRNC Area (0.0 - 26.8mm²) | Difference: 0.8mm² [-0.7, 2.6], Intercept: 1.44mm² [0.2, 3.4], Slope: 0.8 [0.2, 1.1], Quadratic term: 0.004 [-0.1, 0.3], R²: 0.5222
    Composition | Matrix Area (2.6 - 57.1mm²) | Difference: -1.6mm² [-3.6, 0.32], Intercept: 2mm² [-3, 5], Slope: 0.83 [0.7, 1.0], Quadratic term: -0.01 [-0.04, 0.01], R²: 0.7469
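
    The table reports intercept, slope, quadratic term, and R² with 95% confidence intervals, which is consistent with a polynomial calibration analysis. Purely as an illustration (the model form, the percentile bootstrap, and the synthetic numbers below are assumptions; the submission does not describe its statistical methodology), such a fit might look like:

    # Illustrative sketch only: a quadratic calibration fit
    # device = b0 + b1*truth + b2*truth^2, its R², and a percentile-bootstrap
    # 95% CI for the slope b1. Not the sponsor's actual analysis.
    import numpy as np

    def quadratic_fit(truth: np.ndarray, device: np.ndarray):
        b2, b1, b0 = np.polyfit(truth, device, 2)         # coefficients, highest power first
        pred = np.polyval([b2, b1, b0], truth)
        ss_res = np.sum((device - pred) ** 2)
        ss_tot = np.sum((device - device.mean()) ** 2)
        return b0, b1, b2, 1.0 - ss_res / ss_tot          # intercept, slope, quad term, R²

    def bootstrap_slope_ci(truth, device, n_boot=2000, seed=0):
        rng = np.random.default_rng(seed)
        n = len(truth)
        slopes = [quadratic_fit(truth[idx], device[idx])[1]          # refit slope per resample
                  for idx in (rng.integers(0, n, size=n) for _ in range(n_boot))]
        return np.percentile(slopes, [2.5, 97.5])                    # percentile 95% CI

    # Synthetic example over the wall-thickness tested range (not study data)
    rng = np.random.default_rng(1)
    truth = rng.uniform(1.0, 9.0, size=150)
    device = 0.27 + 1.05 * truth - 0.008 * truth ** 2 + rng.normal(0.0, 0.2, size=150)
    print(quadratic_fit(truth, device), bootstrap_slope_ci(truth, device))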

    Note on "Acceptance Criteria": The document doesn't define explicit numerical acceptance criteria (e.g., "Bias must be less than X"). Instead, the presentation of these metrics with their 95% confidence intervals implies that the demonstrated performance as reported is considered acceptable for substantial equivalence to the predicate device. The narrative emphasizes "demonstrating that the product meets defined system requirements and features" and "established analytic performance metrics," without listing specific pre-defined thresholds.

    2. Sample Size Used for the Test Set and Data Provenance

    The document indicates that "Validation testing using phantom and clinical images was conducted."

    • Phantom Data:

      • Sample size: Not explicitly stated as a number of phantom images or measurements, but it mentions, "The mean tested phantom vessel size is 8.7mm [3.9mm]."
      • Provenance: This is lab-generated data from anthropomorphic phantoms. No country of origin is specified, but it's typically an in-house or contracted lab study.
      • Retrospective/Prospective: Phantom studies are inherently prospective, as they are designed experiments.
    • Clinical Image Data:

      • Sample size: Not explicitly stated as a number of clinical images/patients.
      • Provenance: The document refers to "histopathologic specimens" which were "ex vivo tissue specimens with paired CTA." "The tissue specimens are from the carotid artery." No country of origin is specified.
      • Retrospective/Prospective: The use of "ex vivo tissue specimens with paired CTA" and "clinically-accepted scanning protocols" suggests these were likely retrospective collections of clinical images and corresponding tissue samples.

    3. Number of Experts and Qualifications for Ground Truth

    • Structural measurements (from phantoms):

      • Number of Experts: Not applicable, as ground truth for anthropomorphic phantoms is established using "micrometer measurements."
      • Qualifications: Not applicable; the ground truth is physical ("micrometer measurements on anthropomorphic objects").
    • Composition (tissue types from clinical images):

      • Number of Experts:
        • Pathologists: Three (3) independent pathologists for histopathology interpretation.
        • Radiologists: Two (2) independent radiologist users for aligning annotated sections with 3D radiology volume.
      • Qualifications:
        • Pathologists: "Board certified pathologists."
        • Radiologists: Not explicitly stated, but inferred to be qualified radiologists ("independent radiologist users").

    4. Adjudication Method for the Test Set

    • Composition (tissue types):

      • Pathologist Agreement: "Three independent annotations were used for these results to account for acknowledged discordance in histopathology interpretation." This implies that the ground truth for pathology was derived from the consensus or agreement among these three pathologists, though the specific adjudication rule (e.g., majority vote, specific expert's decision) is not detailed.
      • Radiologist Agreement: "four combinations resulting from two unique positioners crossed with two independent radiologist users were used for these results to account for differences in judgment on where the annotated section data applies within the in vivo volume, blinded to vascuCAP results." Similar to the pathologists, this suggests a method for accounting for variability in radiologist interpretation during ground truthing, though the specific adjudication rule is not provided (one way such variability might be pooled is sketched after this list).
    • Structural measurements (phantom data): Adjudication is not applicable as the ground truth is micrometric measurements.
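
    One simple way to fold multiple independent annotations into a variability summary, sketched below purely as an assumption (the submission does not describe how the three pathologist annotations or the four positioner-reader combinations were actually combined), is to compare the device against every annotation and pool the per-section spread:

    # Hypothetical sketch only: pool reader-to-reader spread when each section
    # has several independent reference annotations. All values are made up.
    import numpy as np

    def pooled_reader_sd(device: np.ndarray, annotations: np.ndarray) -> float:
        """device: shape (n_sections,); annotations: shape (n_readers, n_sections)."""
        diffs = device[None, :] - annotations                # device minus each reader's value
        per_section_sd = diffs.std(axis=0, ddof=1)           # spread across readers, per section
        return float(np.sqrt(np.mean(per_section_sd ** 2)))  # root-mean-square pooling

    # Hypothetical example: three pathologist annotations of calcified area (mm²)
    device = np.array([1.2, 0.0, 3.5, 0.8])
    readers = np.array([[1.0, 0.1, 3.2, 0.9],
                        [1.4, 0.0, 3.9, 0.7],
                        [1.1, 0.2, 3.6, 1.0]])
    print(pooled_reader_sd(device, readers))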

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    There is no indication of an MRMC comparative effectiveness study that assesses how much human readers improve with AI vs. without AI assistance. The study focuses on the analytical performance of the device's measurements compared to ground truth, not on human reader performance.

    6. Standalone Performance

    The study primarily describes the standalone (algorithm-only, without human-in-the-loop) performance of the vascuCAP device. The performance metrics presented (bias, slope, R²) are direct measures of the algorithm's accuracy against established ground truth for various vascular measurements and compositions.

    7. Type of Ground Truth Used

    • Structural Measurements (Lumen Area, Wall Area, Wall Thickness, Plaque Burden):
      • Phantom Data: Micrometer measurements on anthropomorphic objects.
    • Stenosis: Derived from the lumen diameters, with ground truth for the lumen diameters from micrometer measurements on phantoms (a formula sketch follows this list).
    • Composition (Calcified Area, LRNC Area, Matrix Area):
      • Clinical Data: Expert consensus/interpretation by "board certified pathologists of histopathologic specimens" with paired CT angiogram (CTA) data. This is further refined by "three independent annotations" from pathologists and "four combinations" from two radiologists for positioning. This suggests a multi-expert consensus ground truth for tissue characterization.
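
    The document does not state which stenosis convention the software uses; as a hedged illustration only, a NASCET-style percent diameter stenosis derived from lumen diameters would be computed as follows:

    # Hedged sketch: a NASCET-style percent diameter stenosis. The submission
    # does not specify the convention actually implemented in vascuCAP.
    def percent_diameter_stenosis(min_lumen_diameter_mm: float,
                                  reference_diameter_mm: float) -> float:
        """Percent stenosis = (1 - minimal diameter / reference diameter) * 100."""
        return (1.0 - min_lumen_diameter_mm / reference_diameter_mm) * 100.0

    # Hypothetical values: 2.5 mm minimal lumen within a 5.9 mm reference vessel
    print(percent_diameter_stenosis(2.5, 5.9))   # ≈ 57.6 %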

    8. Sample Size for the Training Set

    The document does not specify the sample size used for the training set. It focuses on the validation testing performance.

    9. How the Ground Truth for the Training Set was Established

    Since the training set size is not provided, the method for establishing its ground truth is also not described in this document. The description of ground truth establishment is specifically for the test set used in performance validation.

