
510(k) Data Aggregation

    K Number
    K244002
    Device Name
    AngioWaveNet
    Date Cleared
    2025-09-10

    (258 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    AI/ML · SaMD · IVD (In Vitro Diagnostic) · Therapeutic · Diagnostic · PCCP Authorized · Third-party · Expedited review
    Intended Use

    AngioWaveNet is indicated for use by qualified physicians or under their supervision to aid in the analysis and interpretation of X-ray coronary angiographic cines. AngioWaveNet is intended for use in adults during X-ray coronary angiographic imaging procedures as a clinically useful complement to the viewing of standard angiographic cines acquired during diagnostic coronary angiography procedures. AngioWaveNet software is intended for use to enhance the visibility of blood vessels, vascular structures, and related anatomical features within angiographic images, which may be clinically useful to the treating physician.

    Device Description

    AngioWaveNet spatio-temporal enhancement processing (STEP) is an artificial intelligence (AI) and machine learning (ML) system designed to enhance the visibility of blood vessels in angiograms using the unique spatial and temporal information contained in the frames of angiographic cines. The Angiowave STEP method employs a neural network with an encoder-decoder architecture, which sequentially takes multiple contiguous frames of an angiogram as input and uses this information to provide enhanced visualization of vessels in the central frame.

    Angiowave has developed a novel and versatile implementation of its processing in a DICOM node, which has the benefit of requiring no additional on-premises hardware. In addition to the cine processing, the DICOM node handles other logistical tasks such as anonymization, image storage and retrieval (e.g., to/from a cloud location), communication and interoperability, data integrity and security, DICOM conformance, and data archiving and management. This full implementation, which offloads processor-intensive tasks to a cloud location, is termed AngioWaveNet.

    The AI/ML model at the heart of STEP was trained on a comprehensive dataset of 300 anonymized angiograms, averaging 70 frames each, provided by a large non-profit healthcare organization that operates in Maryland and the Washington, D.C. region. The dataset spanned a range of clinical and demographic characteristics presenting to the catheterization laboratory and was acquired from 2003 to 2016 using Philips Allura Xper systems. The dataset was randomly sampled from a large clinical study population, whose baseline patient characteristics have been published and were consistent with a typical coronary catheterization lab population.
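The data flow described above (an encoder-decoder that consumes several contiguous cine frames and emits an enhanced central frame) can be illustrated with a minimal sketch. The actual STEP network is proprietary and not described in the summary; the placeholder below substitutes a simple temporal-median background subtraction, a classical way to make moving, contrast-filled vessels stand out, purely to show the window-in, central-frame-out shape of the processing. All function names are hypothetical.

```python
import numpy as np

def enhance_central_frame(window: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the learned encoder-decoder.

    Takes a stack of contiguous cine frames with shape (T, H, W) and
    returns an "enhanced" central frame. Instead of a neural network,
    this sketch subtracts the temporal median (the quasi-static
    background) so that moving, contrast-filled vessels stand out.
    """
    t = window.shape[0]
    central = window[t // 2].astype(np.float64)
    background = np.median(window.astype(np.float64), axis=0)
    enhanced = central - background
    spread = enhanced.max() - enhanced.min()
    # Normalize to [0, 1] for display; a flat frame maps to all zeros.
    return (enhanced - enhanced.min()) / spread if spread > 0 else np.zeros_like(enhanced)

def process_cine(cine: np.ndarray, window_size: int = 5) -> np.ndarray:
    """Slide a window across the cine (shape (N, H, W)), enhancing every
    frame that has a full window of neighbours; edge frames are skipped."""
    half = window_size // 2
    out = [enhance_central_frame(cine[i - half:i + half + 1])
           for i in range(half, cine.shape[0] - half)]
    return np.stack(out)
```

With a 10-frame cine and a 5-frame window, `process_cine` returns 6 enhanced frames, since the 4 edge frames lack a full window of neighbours.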

    AI/ML Overview

    Here's a structured summary of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for AngioWaveNet:


    Acceptance Criteria and Device Performance Study for AngioWaveNet

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria Category | Specific Criterion | Reported Device Performance (AngioWaveNet) |
    |---|---|---|
    | I. Processing Success Rate | 100% processing success rate for analyzed cases. | Achieved: 100% processing success rate; all analyzed cases met predefined patient-level success criteria. |
    | II. Clinical Decision Impact (CPI) | Neutral or positive clinical decision impact (Likert score ≥ 3). | Achieved: mean Likert score of 3.23 (range 3.12–3.44 across three readers), indicating neutral or positive impact. |
    | III. Ease of Visualization Improvement | Improvement in ease of visualization for a significant percentage of tasks. | Achieved: improved in 99.4% of tasks. |
    | IV. False Positives/Negatives | 0% unresolved false positives/negatives for most readers. | Achieved: 0% unresolved false positives/negatives for most readers. |
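The Likert-based criterion in row II can be checked mechanically from per-reader scores. The scores below are invented for illustration (the submission does not disclose individual scores); only the aggregation logic reflects the criterion, where "neutral or positive" means a mean of at least 3 on the 5-point scale.

```python
import statistics

# Hypothetical per-reader Likert scores (1 = strongly negative impact,
# 5 = strongly positive impact). Invented values for illustration only.
reader_scores = {
    "reader_1": [3, 3, 4, 3],
    "reader_2": [3, 4, 3, 3],
    "reader_3": [4, 3, 4, 3],
}

per_reader_mean = {r: statistics.mean(s) for r, s in reader_scores.items()}
overall_mean = statistics.mean(v for s in reader_scores.values() for v in s)

# Criterion II: neutral-or-positive impact, i.e. mean Likert score >= 3.
assert all(m >= 3 for m in per_reader_mean.values())
print(f"overall mean Likert: {overall_mean:.2f}")
```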

    2. Sample Size and Data Provenance for the Test Set

    • Sample Size (Patients): 31 individual patients.
    • Sample Size (Cines/Angiograms): 97 angiograms (cines), with each patient contributing 3-4 cines (mean 3.13 cines/patient).
    • Sample Size (Vessels Assessed): 169 vessels.
    • Sample Size (Tasks Performed): 3,211 tasks (detection, localization, quantification, characterization) performed across all cines.
    • Data Provenance:
      • Country of Origin: Not explicitly stated, but the data were derived from the "Corewell Angiographic database," suggesting a healthcare system in the United States. (The "Maryland and the Washington, D.C. region" provenance mentioned earlier applies to the training data, not this test set.)
      • Retrospective/Prospective: The data were "sourced from the Corewell Angiographic database," indicating a retrospective design. The cines were captured in "March of 2025," which precedes the reader study ("conducted in July and August of 2025") and the submission dates ("Date Prepared: August 4, 2025"; "Dated: August 5, 2025"), consistent with the data having been acquired before the study was conducted.
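As a quick sanity check, the per-patient mean reported above follows directly from the raw totals:

```python
# Figures quoted in this summary's test-set description.
patients = 31   # individual patients
cines = 97      # angiographic cines

mean_cines_per_patient = cines / patients
# Matches the reported mean of 3.13 cines/patient.
print(round(mean_cines_per_patient, 2))  # 3.13
```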

    3. Number of Experts and Qualifications for Ground Truth for the Test Set

    • Number of Experts: Three (3).
    • Qualifications of Experts: "Experienced interventional cardiologists." No specific years of experience are provided.

    4. Adjudication Method for the Test Set

    The document states, "Blinding of readers to each other's assessments... prevented influence of one reader on another." This suggests that the readers made their assessments independently. However, it does not explicitly describe an adjudication method (like 2+1 or 3+1 consensus) for resolving discrepancies or establishing a single "ground truth" for the test set from the three readers' evaluations. The reported results (e.g., mean Likert score, percentage improvement) appear to be an aggregate of their individual assessments without a formal adjudication process to reconcile differences.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? Yes, a task-based reader study involving three interventional cardiologists evaluating patient cases with the software was conducted.
    • Effect size of human reader improvement with AI vs. without AI assistance: The study focused on the impact of the software on decision-making and visualization, rather than a direct comparison of readers with and without AI assistance on a specific metric.
      • Clinical Decision Impact: Mean Likert score of 3.23 (neutral to positive impact).
      • Ease of Visualization: Improved in 99.4% of tasks.
      • False Positives/Negatives: 0% unresolved for most readers.

    While these indicate improvement in perception and influence on decisions, a direct "effect size" for how much readers improve in accuracy or efficiency with AI assistance compared to without it (e.g., AUC difference, sensitivity/specificity gains) is not quantified in the provided text. The study instead reports performance when the AI is used as a complement.

    6. Standalone Performance Study

    The information provided describes a "Task-Based Reader Study" where human readers (cardiologists) assessed the software's impact. The software's performance is reported in terms of its ability to enhance visualization and influence clinical decisions when used by these readers. This is not a standalone (algorithm only without human-in-the-loop performance) study. The results are intrinsically linked to human interpretation of the enhanced images.

    7. Type of Ground Truth Used for the Test Set

    The ground truth appears to be based on the expert consensus or interpretation of the three interventional cardiologists regarding the "angiographic pathologic determination tasks" and "ease of visualization." There is no mention of an independent, objective ground truth such as pathology reports or long-term outcomes data for the test set.

    8. Sample Size for the Training Set

    • Sample Size (Angiograms): 300 anonymized angiograms.
    • Sample Size (Frames): Averaging 70 frames each (total of approximately 21,000 frames).

    9. How the Ground Truth for the Training Set Was Established

    The document states, "The AI/ML model at the heart of STEP was trained on a comprehensive dataset of 300 anonymized angiograms... provided by a large non-profit healthcare organization that operates in Maryland and the Washington, D.C. region."

    It does not explicitly describe how the ground truth for this training set was established. It mentions the dataset "spanned a range of clinical and demographic characteristics" and was "randomly sampled from a large clinical study population." Typically, for training such models, ground truth would involve expert annotations (e.g., outlining vessels, identifying pathologies) on the original images, but this detail is missing from the provided text.
