510(k) Data Aggregation

    K Number: K122578
    Date Cleared: 2013-02-25 (186 days)
    Regulation Number: 892.2050
    Device Name: VITREA CT TRANSCATHETER AORTIC VALVE REPLACEMENT (TAVR) PLANNING

    Intended Use

    Vitrea CT Transcatheter Aortic Valve Replacement (TAVR) Planning is a non-invasive postprocessing application designed to assist medical professionals with the assessment of the aortic valve and in pre-operational planning and post-operative evaluation of transcatheter aortic valve replacement procedures.

    Device Description

    Vitrea CT Transcatheter Aortic Valve Replacement (TAVR) Planning is a non-invasive postprocessing application designed to assist medical professionals with the assessment of the aortic root and in pre-operational planning and post-operative evaluation of transcatheter aortic valve replacement procedures.

    It allows cardiologists, radiologists and clinical specialists to select patient CT studies from various data sources, view them, and process the images with the help of a comprehensive set of tools. It provides assessment and measurement of different structures of the heart and vessels relevant to approach planning. It provides simple techniques to assess the feasibility of a trans-apical, iliac, or subclavian approach to heart structures for replacement or repair procedures.

    AI/ML Overview

    The provided text describes Vitrea CT Transcatheter Aortic Valve Replacement (TAVR) Planning, a non-invasive post-processing application. The 510(k) summary focuses on demonstrating substantial equivalence to a predicate device (Pie Medical Imaging B.V., 3mensio Workstation (K120367)) rather than establishing specific acceptance criteria and proving performance against them in a detailed study.

    Therefore, much of the requested information (acceptance criteria, reported performance values, specific sample sizes for test and training sets, expert qualifications, adjudication methods, MRMC study details, ground truth types) is not explicitly stated in the provided document in the format typically associated with a robust performance study.

    However, based on the non-clinical testing section and external validation summary, we can infer some aspects and report what is available.


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not provide a formal table of quantitative acceptance criteria with corresponding reported device performance values. Instead, it describes general validation goals and outcomes.

    Feature/Functionality: Accuracy of spatial measurements (distance, angle)
    Acceptance Criteria (inferred from validation goals): Spatial accuracy of image rendering, distance measurement, angular measurement, and navigational tools should align with known values from imaging phantoms.
    Reported Device Performance: Verified using imaging phantoms with known positions, distances, and angles. Specific numerical accuracy values are not provided, but the verification confirmed alignment with expected results for spatial accuracy, distance measurement, angular measurement, navigational tools, and orientation markers.

    Feature/Functionality: Accuracy of automated segmentation for 3D anatomical representation
    Acceptance Criteria (inferred from validation goals): Automated segmentation should enable an accurate 3D representation of the relevant anatomy.
    Reported Device Performance: During external validation with 70 TAVR cases, "Each user felt that the automated segmentation within the application enabled an accurate 3D representation of the relevant anatomy."

    Feature/Functionality: Accuracy of the automated oblique for the annulus valve plane (see the plane-fitting sketch after this table)
    Acceptance Criteria (inferred from validation goals): The automated oblique should provide an accurate starting point for determining the annulus valve plane.
    Reported Device Performance: During external validation with 70 TAVR cases, "The users also felt the automated oblique provided an accurate starting point for determining the annulus valve plane."

    Feature/Functionality: User ability to review, verify, adjust, measure, and report
    Acceptance Criteria (inferred from validation goals): Users should be able to effectively use the software's tools (review 2D/3D images, verify/correct segmentation, create measurements, generate reports).
    Reported Device Performance: During external validation with 70 TAVR cases, "All of the users were able to review the 2D and 3D images, verify and correct the results of segmentations and initialization, create measurements, and generate reports."
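
    The submission does not describe how the "automated oblique" is computed. Purely as an illustration, a common geometric convention defines the annulus valve plane by the three cusp-nadir (hinge-point) landmarks; the minimal NumPy sketch below derives an oblique-view normal from three such points. The landmark coordinates are hypothetical, not taken from the document.

        import numpy as np

        def annulus_plane_from_landmarks(p1, p2, p3):
            """Plane through three cusp-nadir landmarks: a common
            definition of the aortic annulus plane (illustrative only)."""
            p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
            normal = np.cross(p2 - p1, p3 - p1)   # perpendicular to the plane
            normal /= np.linalg.norm(normal)      # unit normal -> oblique view axis
            centroid = (p1 + p2 + p3) / 3.0       # center point of the plane
            return normal, centroid

        # Hypothetical landmark coordinates in patient space (mm)
        n, c = annulus_plane_from_landmarks((12.0, 30.5, 41.0),
                                            (18.2, 27.9, 44.3),
                                            (14.7, 34.1, 47.6))
        print("oblique normal:", n, "centered at:", c)

    In the workflow the validation summary describes, such an automatically derived plane would only be a starting point that the user then verifies and adjusts.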

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: 70 TAVR cases were used for "External Validation."
    • Data Provenance: Not explicitly stated (e.g., country of origin or contributing institutions). The cases are described as "previously acquired medical images," which suggests a retrospective design, though this is not confirmed; the testing is characterized as "non-clinical tests" involving "evaluation of previously acquired medical images."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Not specified. The document states that "external cardiologists and radiologists evaluated 70 TAVR cases" during external validation. It does not clarify how many individual experts were involved.
    • Qualifications of Experts: "External cardiologists and radiologists." No specific details on their years of experience or expertise are provided beyond their general medical specializations.
    • Ground Truth Establishment for Test Set: The exact method for establishing ground truth for the 70 TAVR cases is not detailed. The "external cardiologists and radiologists" evaluated the application's performance, but how their evaluations were aggregated or compared against a definitive "ground truth" (e.g., surgical outcome, independent expert consensus) is not described. The statements suggest they evaluated the device's output against their clinical judgment (e.g., "felt that the automated segmentation...enabled an accurate 3D representation").

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method (such as a 2+1 or 3+1 expert consensus; a 2+1 scheme is sketched below) for the external validation or for establishing ground truth for the test set.
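
    For context, the sketch below shows what a 2+1 adjudication scheme looks like: two primary readers label each case independently, and a third reader arbitrates only when the first two disagree. All reads are hypothetical; nothing in the document indicates such a scheme was used.

        def adjudicate_2plus1(reader_a, reader_b, reader_c):
            """2+1 adjudication: accept the primary readers' label when they
            agree; otherwise defer to the third (adjudicating) reader."""
            return [a if a == b else c
                    for a, b, c in zip(reader_a, reader_b, reader_c)]

        # Hypothetical binary reads (1 = segmentation adequate, 0 = not)
        truth = adjudicate_2plus1([1, 0, 1, 1], [1, 1, 1, 0], [0, 1, 0, 1])
        print(truth)  # -> [1, 1, 1, 1]: cases 2 and 4 were settled by reader C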


    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    • An MRMC comparative effectiveness study, comparing human readers with and without AI assistance, was not described. The external validation involved users (cardiologists and radiologists) evaluating the application, but it neither indicates a comparison against a baseline without the software nor quantifies any improvement in human reader performance. (The headline computation such a study would report is sketched below.)
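
    To make the missing analysis concrete: the headline effect size an MRMC study reports is typically the reader-averaged gain in AUC with AI assistance versus without it. The sketch below computes only that headline number on hypothetical reader scores, using scikit-learn's roc_auc_score; a real MRMC analysis would additionally model reader and case variance (e.g., Obuchowski-Rockette or Dorfman-Berbaum-Metz), which is omitted here.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        def mean_auc_gain(y_true, scores_unaided, scores_aided):
            """Reader-averaged AUC improvement with AI assistance."""
            gains = [roc_auc_score(y_true, aided) - roc_auc_score(y_true, unaided)
                     for unaided, aided in zip(scores_unaided, scores_aided)]
            return float(np.mean(gains))

        # Hypothetical data: 2 readers x 6 cases, confidence scores in [0, 1]
        y = [1, 0, 1, 1, 0, 0]
        unaided = [[0.7, 0.4, 0.6, 0.5, 0.3, 0.5], [0.6, 0.5, 0.7, 0.4, 0.2, 0.4]]
        aided   = [[0.8, 0.3, 0.7, 0.6, 0.2, 0.4], [0.7, 0.4, 0.8, 0.6, 0.1, 0.3]]
        print("mean AUC gain:", mean_auc_gain(y, unaided, aided))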

    6. If a Standalone (Algorithm-Only, Without Human-in-the-Loop) Performance Assessment Was Done

    • The document describes "automated segmentation" and "automated oblique" as features of the software. The verification testing involved "applying risk management," "performance testing (Verification)," and "safety testing (Verification)," which implies testing of the algorithm's functions. Measurement accuracy tests using phantoms would also be a standalone assessment of the algorithm's precision.
    • However, the overall validation described (internal and external) focuses on the user's interaction with, and ability to verify and adjust, the software's outputs, indicating a human-in-the-loop context. No specific "standalone performance study" with metrics such as sensitivity, specificity, or accuracy for the algorithm in isolation (without user intervention or correction) is explicitly reported; a minimal example of such a metric is sketched below.
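
    For a segmentation feature like this one, a standalone metric would typically be an overlap score between the algorithm's output mask and a reference mask, computed without any user correction. A minimal Dice-coefficient sketch on hypothetical masks (no such metric appears in the document):

        import numpy as np

        def dice_coefficient(pred_mask, truth_mask):
            """Dice overlap between an automated segmentation and a
            reference mask; a standard standalone performance metric."""
            pred = np.asarray(pred_mask, dtype=bool)
            truth = np.asarray(truth_mask, dtype=bool)
            intersection = np.logical_and(pred, truth).sum()
            denom = pred.sum() + truth.sum()
            return 2.0 * intersection / denom if denom else 1.0

        # Hypothetical flattened voxel masks
        print(dice_coefficient([1, 1, 0, 1, 0], [1, 0, 0, 1, 1]))  # ~0.667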

    7. The Type of Ground Truth Used

    • For Measurement Accuracy: Known positions, distances, and angles from imaging phantoms served as the ground truth (a simple phantom-based check is sketched after this list).
    • For External Validation (70 TAVR cases): Not explicitly stated. The document implies a qualitative assessment by "external cardiologists and radiologists" who "felt" the segmentation and oblique were accurate and "were able to" perform their tasks. This suggests a form of expert subjective evaluation rather than a direct comparison against a definitive objective ground truth like pathology, surgical outcomes, or a pre-established consensus gold standard.
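
    The phantom-based ground truth lends itself to a simple pass/fail check: each measured distance or angle is compared with the phantom's known value to within a tolerance. The measurements and tolerances below are hypothetical, since the document reports no numbers.

        def within_tolerance(measured, known, tol):
            """Pass if the measurement is within tolerance of the
            phantom's known (ground-truth) value."""
            return abs(measured - known) <= tol

        # Hypothetical phantom checks: (measured, known value, tolerance)
        checks = {"distance (mm)": (49.6, 50.0, 1.0),   # fiducial-to-fiducial
                  "angle (deg)":   (44.2, 45.0, 1.5)}   # between marked axes
        for name, (m, k, t) in checks.items():
            print(name, "pass" if within_tolerance(m, k, t) else "fail",
                  f"error={abs(m - k):.2f}")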

    8. The Sample Size for the Training Set

    • The document does not provide the sample size used for the training set for the Vitrea CT TAVR Planning application. The focus is on validation against the predicate device and usability.

    9. How the Ground Truth for the Training Set Was Established

    • Since the training set size is not mentioned, the method for establishing its ground truth is also not described in the provided document. The 510(k) summary focuses on validating the device for regulatory clearance, primarily through demonstrating substantial equivalence and usability, rather than detailing the underlying machine learning model development.