510(k) Data Aggregation

    K Number
    K243376
    Device Name
    uAngio AVIVA CX
    Date Cleared
    2025-04-28 (180 days)

    Product Code
    Regulation Number
    892.1650
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The system is used to perform image guidance in diagnostic, intervention and surgical procedures. Procedures that can be performed with the system include cardiac angiography, neuro-angiography, vascular and non-vascular angiography, rotational angiography and whole body radiographic/fluoroscopic procedures.

    Device Description

    The uAngio AVIVA CX is an angiographic X-ray system that generates X-rays through the X-ray tube, receives the signal through the flat panel detector, and presents the image after A/D conversion and image post-processing.
    The uAngio AVIVA CX is designed to provide intelligent, safe, and precise image guidance in cardiac, neuro, oncology, peripheral interventional, and surgical procedures.
    The main components of the uAngio AVIVA CX include a C-arm stand, patient table, generator, X-ray tube, flat panel detector, collimator, grid, monitors, control module, control panel, foot switch, hand switch, V-box, and intercom.
    The main software features of the uAngio AVIVA CX include patient registration, patient administration, 2D & 3D image viewing and post-processing, data import/archiving, filming, a camera-assisted recognition function (uSpace), and a voice control function (uLingo).

    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for the uAngio AVIVA CX device do not contain detailed information about specific acceptance criteria for performance metrics (e.g., sensitivity, specificity, accuracy) typically associated with AI-driven diagnostic tools. Instead, the document focuses on non-clinical testing to ensure the device meets safety and operational specifications, as well as satisfactory image quality for clinical use.

    However, based on the non-clinical testing section, we can infer some performance criteria and how the device reportedly meets them.

    What follows extracts and synthesizes the information provided, keeping in mind that the document's "acceptance criteria" are engineering verification targets, not the criteria that would apply to an AI-driven image interpretation system:

    1. Table of Acceptance Criteria and Reported Device Performance

    Since this is a fluoroscopic X-ray system, the "acceptance criteria" are related to technical performance and image quality rather than diagnostic AI algorithm metrics like sensitivity or specificity.

    | Feature/Metric | Acceptance Criterion (Implied/Stated) | Reported Device Performance |
    | --- | --- | --- |
    | Cone Beam CT (CBCT) | | |
    | C-arm positioning accuracy | Acceptable and repeatable position error. | "The position error of the C-arm is acceptable and the position is repeatable." |
    | CBCT imaging performance | Fulfills requirements for in-plane uniformity, spatial resolution, reconstruction section thickness, noise, contrast-to-noise ratio, and artifact analyses. | "The imaging performance of CBCT fulfills the requirements." |
    | Radiation dose (CBCT) | CTDI of CBCT protocols fulfills the requirement. | "CTDI of CBCT protocols fulfills the requirement." |
    | Fusion (Image Registration Algorithm) | | |
    | Accuracy of image registration algorithm (mTRE) | Mean target registration error (mTRE) less than the voxel diagonal distance. | "All the mTREs of the test cases are smaller than the voxel diagonal distance, which can achieve sub-pixel accuracy registration. Image Registration Algorithm is able to meet the verification criteria, the verification result is passed." |
    | DSA (Digital Subtraction Angiography) | | |
    | Dynamic range (submillimeter vessel visibility) | Submillimeter vessel simulation component is visible across all copper step wedges of the subtracted image. | "submillimeter sized blood vessels can be seen in all copper step wedges" (for four typical groups: Body, Head, Vascular, Pediatric). |
    | Contrast sensitivity (low-millimeter vessel visibility) | Low-millimeter vascular simulation component should be visible in a copper step wedge of sufficient thickness in the subtraction image. | "the thickness of blood vessels visible in sufficiently thick copper step wedges meets the requirements." |
    | 3D Roadmap (Neuro Registration Algorithm) | | |
    | Accuracy of neuro registration (mTRE) | A precision of at least 1 mm (mTRE < 1 mm). | "The registration results of neuro registration algorithm is that the mTRE is less than 1mm. The neuro registration was able to meet a precision of at least 1 mm, the verification was passed." |
    | uChase (Stitching Algorithm) | | |
    | Accuracy of image registration in stitching (mTRE) | Mean target registration error (mTRE) less than the pixel diagonal distance. | "All the mTREs of the test cases are smaller than the pixel diagonal distance" |
    | Clinical utility of stitching results | Average score from clinical specialists higher than 2. | "the average score of stitching results is 2.95, which is higher than 2. The Stitching Algorithm was able to meet the verification criteria and the verification of the algorithm was passed." |
    | uSpace (Camera-assisted recognition) | | |
    | Human key point detection accuracy | Required accuracy for not deviating from the range of the acquisition field of view. | "Human key point detection achieves the required accuracy for not deviating from the range of the acquisition field of view." |
    | Collision detection rate | Specified detection rate. | "Collision detection achieves the specified detection rate." |
    | Auto SID adjustment accuracy | Required accuracy for maintaining a distance from the detector to the patient during C-arm rotation or patient table lifting. | "Auto SID adjustment achieves the required accuracy for maintaining a distance from detector to the patient during C-arm rotation or patient table lifting." |
    | Radiation safety (different SID) | Tested within the allowable adjustment range; meets requirements. | "The radiation safety test results of the uAngio AVIVA CX under different SID [...] meet the requirements." |
    | Image quality (different SID) | Meets requirements based on clinical evaluation and objective physical characteristics. | "The image quality evaluation was tested through the clinical evaluation and objective physical characteristics. According to the acceptable criteria in the regulations, the image quality test results of the uAngio AVIVA CX under different SID meet the requirements." |
    | uLingo (Voice control) | | |
    | Wake-up recognition rate (SNR ≥ 15 dB(A)) | ≥95% accuracy rate. | "When the environmental signal-to-noise ratio (SNR) is ≥15 dB(A), the wake-up accuracy rate reaches ≥95%" |
    | Wake-up recognition rate (SNR = 10 dB(A)) | ≥85% accuracy rate. | "When environmental SNR is 10 dB(A), the wake-up accuracy rate reaches ≥85%." |
    | Voice command recognition rate (SNR ≥ 15 dB(A)) | ≥95% success rate for each command. | "When the environmental SNR is ≥15 dB(A), the recognition success rate for each command reaches ≥95%" |
    | Voice command recognition rate (SNR = 10 dB(A)) | ≥85% success rate for each command. | "When the environmental SNR is 10 dB(A), the recognition success rate for each command reaches ≥85%" |
    | Clinical Image Evaluation | Image quality fulfills the needs for diagnostic, intervention, and surgical procedures (score ≥ 3 on a five-point scale for spatial detail, contrast-noise performance, clinical motion, and clinical features of interest). | "Each image received a score of ≥ 3 and received a PASS result, indicating that image quality fulfills the need for diagnostic, intervention, and surgical procedures." |
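
    Several of the registration criteria above reduce to the same rule: compute the mean target registration error (mTRE) over a set of corresponding target points and compare it against a geometric threshold (the voxel or pixel diagonal for Fusion and uChase, a fixed 1 mm for 3D Roadmap). Below is a minimal sketch of that check in Python; the point sets, voxel spacing, and function names are illustrative assumptions, not taken from the submission.

```python
import numpy as np

def mean_target_registration_error(fixed_targets, registered_targets):
    """Mean Euclidean distance (mm) between corresponding target points."""
    diffs = np.asarray(fixed_targets) - np.asarray(registered_targets)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))

def passes_subvoxel_criterion(mtre_mm, voxel_spacing_mm):
    """Acceptance rule from the table: mTRE below the voxel diagonal distance."""
    voxel_diagonal = float(np.linalg.norm(voxel_spacing_mm))
    return mtre_mm < voxel_diagonal

# Illustrative numbers only: 0.5 mm isotropic voxels give a diagonal of
# ~0.87 mm, so a residual misalignment of 0.3 mm per axis (mTRE ~0.52 mm)
# would pass. The 3D Roadmap criterion would instead compare against 1 mm.
fixed = np.array([[10.0, 20.0, 30.0], [40.0, 50.0, 60.0]])
registered = fixed + 0.3  # pretend residual misalignment after registration
mtre = mean_target_registration_error(fixed, registered)
print(mtre, passes_subvoxel_criterion(mtre, [0.5, 0.5, 0.5]))
```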

    2. Sample size(s) used for the test set and the data provenance

    • CBCT, Fusion, DSA, 3D Roadmap, uChase (Stitching Algorithm): These refer to various "test cases" or "test data." The exact number of images/cases is not specified for most, but:
      • Fusion: "Each group has three sets of test data." (Groups refer to DSA mask registered with DSA mask, CBCT, CT, CTA, and MR).
      • DSA: Four typical groups (Body, Head, Vascular, Pediatric) were tested, with "each group applying two sets of test data."
      • 3D Roadmap: "It has eight sets of test data in the group."
      • uChase: Refers to "test cases," specific number not listed.
    • uLingo (Voice control): "For U.S. Participants, 18 talkers were invited to record the voice commands for verification."
    • Clinical Image Evaluation: "Sample images were obtained from hospitals to meet the following requirements: contained images of the head, cardiac, body, and extremities with different acquisition protocols representative of US clinical use and populations. Typical cases have been selected such as Internal carotid artery angioplasty and stenting, internal carotid artery stenosis and aneurysm, PCI, radiofrequency ablation, TACE, TIPS, lower extremity arterial angioplasty and so on." The document does not specify the exact number of images or cases in this clinical image evaluation.
    • Data Provenance:
      • uLingo: "U.S. Participants."
      • Clinical Image Evaluation: "obtained from hospitals," "representative of US clinical use and populations." This suggests retrospective data from US clinical settings. Other technical tests do not explicitly state data provenance but imply internal testing environments.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • uChase (Stitching Algorithm): "average score from clinical specialists." The number of specialists and their specific qualifications are not provided, only stating "clinical specialists."
    • Clinical Image Evaluation: "[images] evaluated by an ADR certified radiologist." This indicates one expert. The qualification specified is "ADR certified radiologist."

    4. Adjudication method for the test set

    • uChase (Stitching Algorithm): Ground truth seems to be implicitly established by "clinical specialists" providing an "average score." The method of combining scores (e.g., majority vote, consensus) or a formal adjudication process is not detailed.
    • Clinical Image Evaluation: "evaluated by an ADR certified radiologist." This implies a single reader assessment rather than an adjudication process involving multiple experts for consensus.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance

    The document does not describe a multi-reader, multi-case (MRMC) comparative effectiveness study involving human readers with and without AI assistance for device functionality like image interpretation. The AI components mentioned (uSpace for auto-positioning, uLingo for voice control, image quality algorithms) are primarily aimed at workflow enhancement and physical control of the device, not direct diagnostic interpretation or assistive reading. Therefore, no effect size of human reader improvement with AI assistance is provided.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done

    Yes, various standalone algorithm performances were evaluated:

    • CBCT imaging performance metrics.
    • Fusion algorithm accuracy (mTRE).
    • DSA algorithm performance (vessel visibility).
    • 3D Roadmap algorithm accuracy (mTRE).
    • uChase Stitching Algorithm accuracy (mTRE).
    • uSpace performance (human key point detection, collision detection, Auto SID adjustment).
    • uLingo algorithm performance (wake-up and command recognition rates).

    These evaluations assess the algorithms' intrinsic performance without direct human-in-the-loop diagnostic interpretation.
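
    These standalone evaluations share a common pattern: each metric is measured and compared against a predefined threshold, yielding a per-check PASS/FAIL. Below is a minimal, table-driven sketch of that pattern; the thresholds mirror the table in section 1, but the measured values (other than the reported 2.95 stitching score) are placeholders rather than reported results.

```python
import operator
from dataclasses import dataclass

# Relations a measurement must satisfy versus its threshold.
OPS = {">=": operator.ge, ">": operator.gt, "<": operator.lt}

@dataclass
class Check:
    name: str
    measured: float
    op: str          # key into OPS
    threshold: float

    def passed(self) -> bool:
        return OPS[self.op](self.measured, self.threshold)

checks = [
    Check("uLingo wake-up rate, SNR >= 15 dB(A)", 0.96, ">=", 0.95),      # 0.96 is a placeholder
    Check("uLingo wake-up rate, SNR = 10 dB(A)", 0.87, ">=", 0.85),       # 0.87 is a placeholder
    Check("uChase mean clinical score (5-point scale)", 2.95, ">", 2.0),  # 2.95 is reported
    Check("3D Roadmap mTRE (mm)", 0.8, "<", 1.0),                         # 0.8 is a placeholder
    Check("Clinical image score (per image)", 3.0, ">=", 3.0),            # floor from the summary
]

for check in checks:
    print(f"{check.name}: {'PASS' if check.passed() else 'FAIL'}")
```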

    7. The type of ground truth used

    • CBCT, Fusion, DSA, 3D Roadmap, uChase (Stitching Algorithm): For these technical measurements, the ground truth is established through physical phantoms (Geometric phantom, Catphan700 phantom, chest phantom, head phantom, CTDI dosimetry phantom, body phantom, digital subtraction angiography phantom with compensation test step wedge) and objective quantitative metrics (e.g., mTRE, pixel diagonal distance, stated requirements for vessel visibility, accuracy thresholds).
    • uSpace (Camera-assisted recognition): Inferred to be based on objective measurements of physical distance, field of view, and collision events.
    • uLingo (Voice control): Based on objective recording and analysis of recognition rates against known voice commands and environmental conditions.
    • Clinical Image Evaluation: "image quality was evaluated by an ADR certified radiologist using a five-point scale," indicating expert opinion from a single reader on image quality, rather than pathology or outcomes data.

    8. The sample size for the training set

    The document does not specify the sample size for the training set for any of the machine learning components (uSpace, uLingo, image registration algorithms). It only mentions "machine learning methods" are used.

    9. How the ground truth for the training set was established

    The document does not describe how the ground truth for the training set was established for any of the machine learning components. It only mentions the "machine learning methods" are used for uSpace and uLingo but provides no details on their training data or labels.
