Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K242240
    Device Name
    CaRi-Plaque
    Date Cleared
    2025-02-20

    (205 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Intended Use

    CaRi-Plaque is intended to provide an optimized non-invasive application to analyze coronary anatomy and pathology and aid in determining treatment paths from a set of Computed Tomography (CT) Angiographic images.

    CaRi-Plaque is a web-based image processing application. It is a non-invasive diagnostic reading software intended for use as an interactive tool for viewing and analyzing cardiac CT data for determining the presence and extent of coronary plaques and luminal stenoses.

    CaRi-Plaque is intended for use by internal operators who have been appropriately trained in the software's functions, capabilities and limitations.

    Users should be aware that certain views make use of interpolated data. This is data that is created by the software based on the original data set. Interpolated data may give the appearance of healthy tissue in situations where pathology may be present that is near or smaller than the scanning resolution.

    The analysis results produced by the software and provided to the Healthcare Professional are not intended to replace the skill and judgment of a qualified medical practitioner. The analysis results should be reviewed with other clinical information which may include but is not limited to: The patient's original CT images, clinical history, symptoms, clinical risk factors, results of other diagnostic tests, and the clinical judgement of appropriately qualified Healthcare Professionals.

    Device Description

    CaRi-Plaque v1.0 ("CaRi-Plaque," the subject device) is a web-based, software-only application for the quantitative and qualitative clinical analysis of previously acquired CCTA DICOM data for the purpose of characterizing and quantifying plaque formation and stenosis in coronary arteries. CaRi-Plaque aids healthcare professionals trained in cardiovascular health and patient care (including but not limited to Cardiologists, Radiologists, and others) by describing the physical characteristics of coronary plaque volume and cross-sectional area, determined using a combination of 3D image-thresholding computerized algorithms and manual editing tools, to provide automated quantification and characterization of coronary atherosclerotic plaque and stenosis. The processing of CT scan data is performed by trained operators, and the resulting CaRi-Plaque Report is provided to the healthcare professional to enable them to assess the extent and severity of coronary disease.

    The CaRi-Plaque report includes visual representations of each vessel and associated quantitative outputs. These quantitative outputs include: Plaque Burden, Calcified Plaque (CP) Volume, Total Plaque Volume, Noncalcified Plaque (NCP) Volume, Low Density Noncalcified Plaque (LD-NCP) Volume, Remodeling Index, and Maximum Stenosis. CaRi-Plaque does not replace standard clinical practice or clinical decision-making, and the results of the CaRi-Plaque analysis are to be used in the context of other patient information by the healthcare professional. The healthcare professional may request a re-analysis of the CT scan data if they do not agree with the report analysis.
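
    The submission attributes plaque characterization to 3D image-thresholding algorithms plus manual editing, but it does not disclose the thresholds or the implementation. Purely as an illustration of the general approach, the sketch below classifies plaque voxels using assumed Hounsfield-unit cut-offs and a hypothetical voxel grid; it is not the CaRi-Plaque algorithm.

```python
import numpy as np

# Assumed, illustrative Hounsfield-unit (HU) cut-offs for plaque components.
# The actual CaRi-Plaque thresholds and methods are not disclosed in the 510(k) summary.
LD_NCP_MAX_HU = 30     # low-density noncalcified plaque: below ~30 HU (assumption)
NCP_MAX_HU = 350       # noncalcified plaque: ~30-350 HU; calcified above ~350 HU (assumption)

VOXEL_VOLUME_MM3 = 0.5 * 0.5 * 0.5  # hypothetical isotropic voxel spacing in mm


def plaque_volumes(hu: np.ndarray, plaque_mask: np.ndarray) -> dict:
    """Classify plaque voxels by HU and return component volumes in mm^3.

    hu          -- 3D array of Hounsfield units from a CCTA series
    plaque_mask -- boolean 3D array marking voxels segmented as plaque
                   (segmentation itself is out of scope for this sketch)
    """
    plaque_hu = hu[plaque_mask]
    ld_ncp = np.count_nonzero(plaque_hu < LD_NCP_MAX_HU)
    ncp = np.count_nonzero((plaque_hu >= LD_NCP_MAX_HU) & (plaque_hu < NCP_MAX_HU))
    cp = np.count_nonzero(plaque_hu >= NCP_MAX_HU)
    return {
        "total_plaque_volume_mm3": plaque_hu.size * VOXEL_VOLUME_MM3,
        "calcified_volume_mm3": cp * VOXEL_VOLUME_MM3,
        "noncalcified_volume_mm3": ncp * VOXEL_VOLUME_MM3,
        "low_density_ncp_volume_mm3": ld_ncp * VOXEL_VOLUME_MM3,
    }


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hu = rng.normal(100, 150, size=(64, 64, 32))   # synthetic HU volume
    mask = np.zeros_like(hu, dtype=bool)
    mask[20:30, 20:30, 10:20] = True               # hypothetical plaque region
    print(plaque_volumes(hu, mask))
```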

    AI/ML Overview

    Here's an analysis of the acceptance criteria and the study demonstrating that the device meets them, based on the provided text:


    Acceptance Criteria and Device Performance

    Acceptance criteria (agreement metrics) and reported device performance:

    Pearson Correlation Coefficient for:
    - Maximum Stenosis: 0.930
    - Total Plaque Volume: 0.985
    - Calcified Plaque Volume: 0.999
    - Noncalcified Plaque Volume: 0.977
    - Plaque Burden: 0.885
    - Low Density Noncalcified Plaque Volume: 0.817

    Cohen's Kappa for Remodeling Index: agreement of 63.3% (K = 0.42)

    Branch-level Pearson Correlation Coefficients (or Cohen's Kappa for Remodeling Index) for RCA, R-PDA, R-PLB:
    - Maximum Stenosis: 0.863
    - Total Plaque Volume: 0.958
    - Calcified Plaque Volume: 0.964
    - Noncalcified Plaque Volume: 0.953
    - Plaque Burden: 0.924
    - Low Density Noncalcified Plaque Volume: 0.633
    - Remodeling Index: 90.3% (K = 0.31)

    Branch-level Pearson Correlation Coefficients (or Cohen's Kappa for Remodeling Index) for LAD, D1, D2, Ramus:
    - Maximum Stenosis: 0.929
    - Total Plaque Volume: 0.959
    - Calcified Plaque Volume: 0.997
    - Noncalcified Plaque Volume: 0.942
    - Plaque Burden: 0.913
    - Low Density Noncalcified Plaque Volume: 0.802
    - Remodeling Index: 87.2% (K = 0.44)

    Branch-level Pearson Correlation Coefficients (or Cohen's Kappa for Remodeling Index) for LCX, OM1, OM2:
    - Maximum Stenosis: 0.911
    - Total Plaque Volume: 0.948
    - Calcified Plaque Volume: 0.991
    - Noncalcified Plaque Volume: 0.934
    - Plaque Burden: 0.888
    - Remodeling Index: 90.3% (K = 0.31)
    (Insufficient low density noncalcified plaque in these branches to generate a meaningful correlation, so no coefficient is reported for that metric.)

    Inter-operator agreement and repeatability: good inter-operator agreement and repeatability at the branch level within the CaRi-Plaque device arm and between readers in the Ground Truth arm.
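
    For context, the agreement statistics above (Pearson correlation for the continuous outputs, percent agreement plus Cohen's kappa for the categorized Remodeling Index) can be computed from paired device and reader measurements as sketched below. The paired data and the Remodeling Index categories in the sketch are hypothetical, not taken from the study.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired per-vessel measurements (not the study data):
device_tpv = np.array([120.5, 85.0, 240.3, 0.0, 60.7])   # device Total Plaque Volume, mm^3
reader_tpv = np.array([115.0, 90.2, 250.1, 5.3, 58.0])   # ground-truth reader TPV, mm^3

r, p_value = pearsonr(device_tpv, reader_tpv)
print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")

# Remodeling Index reported as percent agreement plus Cohen's kappa on a
# categorized value; the categories used here are an assumption for illustration.
device_ri = ["positive", "none", "none", "positive", "none"]
reader_ri = ["positive", "none", "positive", "positive", "none"]

agreement = np.mean([d == g for d, g in zip(device_ri, reader_ri)])
kappa = cohen_kappa_score(device_ri, reader_ri)
print(f"Agreement = {agreement:.1%}, Cohen's kappa = {kappa:.2f}")
```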

    Study Details:

    1. Sample size used for the test set and the data provenance:

      • Sample Size: 117 subjects (85 men, 32 women), aged 27 to 85 years.
      • Data Provenance: Multi-center, international patient population from four (4) sites (2 US and 2 OUS - Outside US).
        • Individual subject-level ethnicity for 57 subjects: White 76%, Asian 4%, Middle Eastern 1%, other 14%, and unknown 5%.
        • Ethnicity for the remaining 60 subjects (48 subjects from one site, 12 from another) was estimated from local population census data (www.census.gov). For the first site: White (not Hispanic or Latino) 44%, Black 22%, Hispanic or Latino 27%, and Asian 5%; for the second site: White (not Hispanic or Latino) 46%, Black 19%, Hispanic or Latino 26%, and Asian 13%.
        • CT scanners included commercial scanners from Toshiba, GE, Philips, and Siemens.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • The document states "qualified independent medical experts performed ground truthing." It does not specify the exact number of experts or their specific qualifications (e.g., number of years of experience, subspecialty).
    3. Adjudication method for the test set:

      • The document describes "agreement between the expert readers and the CaRi-Plaque device measurements." It does not explicitly state an adjudication method like 2+1 or 3+1 for resolving discrepancies among expert readers themselves, but rather refers to them collectively for "ground truth".
    4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

      • No, a multi-reader multi-case (MRMC) comparative effectiveness study evaluating the improvement of human readers with AI assistance versus without AI assistance was not mentioned or reported. The study focused on the agreement between the device's measurements and expert-determined ground truth.
    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

      • Yes, the clinical validation study measured the agreement between the CaRi-Plaque device measurements (algorithm's output) and expert-determined ground truth. This indicates a standalone performance evaluation of the algorithm against the established ground truth. The device is described as "aid[ing] healthcare professionals" and its results are "provided to the Healthcare Professional," implying the algorithm works standalone to produce these measurements for subsequent review.
    6. The type of ground truth used:

      • Expert consensus / expert interpretation: the document states the ground truth was "determined by qualified independent medical experts," which implies expert consensus or interpretation was used.
    7. The sample size for the training set:

      • The document does not specify the sample size for the training set. It only describes the clinical validation study (test set).
    8. How the ground truth for the training set was established:

      • The document does not specify how the ground truth for the training set was established, as information about the training set itself is not provided.

    K Number
    K200274
    Device Name
    CariCloud
    Date Cleared
    2020-05-21

    (107 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    CariCloud is a software device used by operators to evaluate attenuation in the coronary arteries and surrounding tissue in CCTA images.
    CariCloud is to be used by trained operators. CariCloud analysis results are to be used by Healthcare Professionals and may assist in diagnosis.
    CariCloud analysis results are indicated for use for all patients referred for CCTA imaging.

    Device Description

    CariCloud is an image processing prescription software device intended to be used to display, manipulate and quantify previously acquired CT images.
    Datasets are downloaded from remote systems for clinical interpretation. Trained users will initiate and oversee image analysis and intervene when necessary to correct processing errors. The outcome of analysis will be used to create a summary report that includes qualitative and quantitative analysis.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for CariCloud v1.0, based on the provided text:

    Acceptance Criteria and Device Performance

    The submission primarily focuses on demonstrating substantial equivalence to a predicate device (TeraRecon iNtuition) through performance testing related to inter-operator and intra-operator variability in measurements. The acceptance criteria aren't explicitly stated as numerical thresholds for specific metrics, but are instead derived from the performance of the predicate device and the conclusion that the new device's performance is "excellent" and shows "no significant difference."

    Acceptance Criteria (Implied) | Reported Device Performance (CariCloud v1.0)
    Inter-Operator Variability (ICC) |
    - PFA: Excellent (comparable to predicate's 0.998-0.999) | Reader 1: 0.997
    - TVOI-A: Excellent (comparable to predicate's 0.983-0.986) | Reader 2: 0.997
    - TVOI-V: Excellent (comparable to predicate's 0.973-0.991) | Reader 3: 0.994
    - TROI-A: Excellent (comparable to predicate's 1.0) | Reader 1: 0.987
    Intra-Operator Variability (ICC) | Reader 2: 0.971
    - ICC > 0.96 for all measures | Reader 3: 0.982
    - Maximum difference in ICC between devices ≤ 0.017 | Reader 1: 0.971
    No significant difference in inter- or intra-operator variability | Conclusion: "No significant difference between the inter-operator variability and intra-operator variability results attained on the two devices." and "For all measures, the intra-operator agreement achieved on both the predicate device and the new device for each operator was excellent (ICC greater than 0.96) with a maximum difference in ICC between the two devices was 0.017."
    Safety and Effectiveness | Conclusion: "CariCloud v1.0 does not raise any new questions of safety or effectiveness as compared to the predicate device."

    Note: The specific meanings of PFA, TVOI-A, TVOI-V, and TROI-A are not explicitly defined in the provided document, but they represent different measures of attenuation in the coronary arteries and surrounding tissue in CCTA images, as indicated by the device's intended use.
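
    As background for the ICC figures above and the model described in the study details below (an average-rating, absolute-agreement, 2-way mixed-effects model), the sketch below computes that form of ICC, often written ICC(A,k), from a hypothetical subjects-by-operators matrix. It is an illustration only, not the submitter's analysis code.

```python
import numpy as np


def icc_agreement_average(ratings: np.ndarray) -> float:
    """Two-way, absolute-agreement, average-measures ICC (McGraw & Wong ICC(A,k)).

    ratings -- (n_subjects, k_raters) matrix with one measurement per subject/rater.
    """
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Mean squares from a two-way ANOVA decomposition.
    ms_rows = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand_mean) ** 2) / (k - 1)
    ss_total = np.sum((ratings - grand_mean) ** 2)
    ss_error = ss_total - ms_rows * (n - 1) - ms_cols * (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (ms_rows + (ms_cols - ms_error) / n)


# Hypothetical data: 6 cases measured by 3 operators (not the study data).
ratings = np.array([
    [10.1, 10.3, 10.0],
    [15.2, 15.0, 15.4],
    [ 8.9,  9.1,  9.0],
    [20.5, 20.2, 20.8],
    [12.0, 12.3, 11.9],
    [17.8, 17.5, 18.0],
])
print(f"ICC(A,k) = {icc_agreement_average(ratings):.3f}")
```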


    Study Details: Performance Testing - Inter-operator and Intra-operator Variability

    1. Sample size used for the test set and the data provenance:

      • The document does not specify the sample size (number of cases/patients) used for the test set.
      • The document does not specify the data provenance (e.g., country of origin, retrospective or prospective) for the test set.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • The document mentions "Reader 1," "Reader 2," and "Reader 3" in the ICC table, indicating that three operators were involved in the testing.
      • The qualifications of these operators are not provided. The Indications for Use state "CariCloud is to be used by trained operators," implying these readers were trained on the device.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • The document states that ICC values were calculated "between individual results of each read by each operator on each device for each measure based on an average-rating, absolute-agreement, 2-way mixed-effects model." This implies that the individual reads of each operator were compared against each other, rather than against a single adjudicated ground truth derived from multiple experts. Therefore, a formal adjudication method like 2+1 or 3+1 for ground truth establishment is not mentioned and appears not applicable in the context of this inter/intra-operator variability study.
    4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

      • This document describes a study comparing the performance consistency (inter-operator and intra-operator variability) of the device itself (CariCloud v1.0) and a predicate device, as measured by human readers using them. It is not an MRMC comparative effectiveness study evaluating the improvement of human readers with AI assistance versus without AI assistance. Therefore, no such effect size is reported.
    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

      • The testing described focuses on human operators using the device to make measurements. It is a "software device used by operators to evaluate attenuation." While the device itself performs analysis, the performance metrics reported here (ICC values) are derived from "individual results of each read by each operator." Therefore, a standalone (algorithm-only) performance study, independent of human interaction or measurement, is not explicitly described or reported in this section. The device's description does mention "trained users will initiate and oversee image analysis and intervene when necessary to correct processing errors," suggesting active human involvement.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • For the inter-operator and intra-operator variability study, the "ground truth" for the ICC calculation is effectively the agreement between the operators' measurements themselves, or the agreement of an individual operator's repeated measurements. It does not refer to an independent, definitive medical ground truth like pathology or outcomes data. The study assesses the consistency of measurements obtained using the devices.
    7. The sample size for the training set:

      • The document does not provide information regarding the sample size of any training set used for the CariCloud v1.0 software.
    8. How the ground truth for the training set was established:

      • Since no training set information is provided, there is also no information on how the ground truth for any training set might have been established.