K Number
K221100
Device Name
Viz RV/LV
Manufacturer
Date Cleared
2022-08-29

(137 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

The Viz RV/LV Software device is designed to measure the maximal diameters of the right and left ventricles of the heart from a volumetric CTPA acquisition and report the ratio of those measurements. Viz RV/LV analyzes cases using an artificial intelligence algorithm to identify the location and measurements of the ventricles. The Viz RV/LV software provides the user with annotated images showing ventricular measurements. Its results are not intended to be used on a stand-alone basis for clinical decision-making or otherwise preclude clinical assessment of CTPA cases.

Device Description

The Viz RV/LV is a software-only device that uses a locked artificial intelligence/machine learning (AI/ML) algorithm to measure the maximal diameters of the right and left ventricles of the heart from a computed tomography pulmonary angiogram (CTPA) and report the ratio of those measurements. Viz RV/LV produces an Annotated Image Series (Figure 1) and an RV/LV Summary Report (Figure 2) in DICOM format.

The Annotated Image Series shows an RGB overlay on each slice of the input scan: red and blue solid lines indicate the maximum ventricular diameter for each ventricle, and a dashed line indicates a diameter measured on a slice within 10 slices of the global maximum ventricular diameter. The interventricular septum is marked in solid green on all images where diameters are marked, and the maximal diameter value is presented alongside the solid lines on the slices with the global maximum diameter.

The RV/LV Summary Report summarizes the results of the ventricle analysis and shows the slices with the maximum right and left ventricular diameters. The lines measuring the maximum RV and LV diameters are displayed over the original CTPA slice image, along with the lengths of the largest RV and LV diameters and the RV/LV ratio.

Viz RV/LV is hosted on Viz.ai's Backend Server and analyzes applicable CTPA scans that are acquired on CT scanners and forwarded to the server. The results of the analysis are exported in DICOM format and sent to a PACS destination for review by thoracic radiologists, general radiologists, pulmonologists, cardiologists, or other similar physicians to assist in the assessment of right ventricle enlargement.
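The core quantity the device reports, the ratio of the maximal RV and LV diameters, can be illustrated with a short sketch. The class, function names, and the RV/LV > 1.0 enlargement threshold are illustrative assumptions (a ratio above 1.0 is commonly cited in the pulmonary embolism literature), not values taken from the submission.

```python
# Hypothetical sketch of the device's core output: the ratio of the maximal
# right- and left-ventricular diameters measured from a CTPA volume.
# Names and the 1.0 threshold are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class VentricleMeasurements:
    rv_max_diameter_mm: float  # maximal RV diameter found across slices
    lv_max_diameter_mm: float  # maximal LV diameter found across slices


def rv_lv_ratio(m: VentricleMeasurements) -> float:
    """Return the RV/LV diameter ratio shown in the summary report."""
    return m.rv_max_diameter_mm / m.lv_max_diameter_mm


def suggests_rv_enlargement(m: VentricleMeasurements, threshold: float = 1.0) -> bool:
    """An RV/LV ratio above the threshold is commonly read as RV enlargement."""
    return rv_lv_ratio(m) > threshold


m = VentricleMeasurements(rv_max_diameter_mm=45.0, lv_max_diameter_mm=40.0)
print(round(rv_lv_ratio(m), 3))    # 1.125
print(suggests_rv_enlargement(m))  # True
```

The ratio itself carries the clinical signal, so the sketch keeps the raw diameters alongside it, mirroring the summary report, which displays both the diameters and the ratio.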

AI/ML Overview

Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text.

1. Table of Acceptance Criteria and Reported Device Performance

The acceptance criteria are not explicitly stated as distinct numerical targets; only a "performance goal" for MAE is mentioned. Based on the wording, the primary acceptance criterion for the algorithm's accuracy can be inferred from the clinical performance section.

| Metric / Criterion | Acceptance Criteria (Inferred) | Reported Device Performance |
|---|---|---|
| Mean Absolute Error (MAE) | MAE between the algorithm's measurements and the established ground truth less than 7.2 mm (performance goal). | The study demonstrated that the MAE between the algorithm's measurements and the established ground truth was less than 7.2 mm. |
| Agreement (general) | High degree of agreement between algorithm measurements and manually obtained measurements. | "The algorithm's ventricle diameter measurements were aligned when compared against the measurements that were obtained manually." There was also a high degree of agreement between the different trained radiologists, as demonstrated by statistical analysis. |
| Clinical performance | Demonstrate safety and effectiveness comparable to the predicate device. | "Clinical performance data demonstrated that the device is as safe and effective as the previously cleared Imbio RV/LV software (K203256)." |
| Training/test separation | No overlapping data between the training sets and the pivotal study. | "There was no overlapping data between the training sets and the pivotal study in terms of time and patient images." |
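The inferred MAE acceptance criterion amounts to a simple threshold check. A minimal sketch, in which the sample diameters are invented for illustration and only the 7.2 mm goal comes from the text:

```python
# Sketch of the inferred acceptance check: the mean absolute error (MAE)
# between the algorithm's diameters and the ground-truth diameters must be
# below 7.2 mm. Sample values are invented for illustration.

def mean_absolute_error(predicted, truth):
    """Average of per-case absolute differences, in the same units (mm)."""
    assert len(predicted) == len(truth)
    return sum(abs(p - t) for p, t in zip(predicted, truth)) / len(predicted)


algo_mm = [44.1, 39.8, 51.2, 36.5]    # algorithm ventricular diameters (mm)
truth_mm = [45.0, 40.0, 50.0, 38.0]   # radiologist ground truth (mm)

mae = mean_absolute_error(algo_mm, truth_mm)
print(f"MAE = {mae:.2f} mm, goal met: {mae < 7.2}")
```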

2. Sample Size Used for the Test Set and Data Provenance

  • Test Set Sample Size: Not explicitly stated as a number of cases, but the study implies a sample sufficient for statistical analysis. The text mentions that the 4 clinical sites used in the pivotal study were a subset of a larger set of 13 sites.
  • Data Provenance:
    • Country of Origin: Not explicitly stated, but the submission is to the U.S. FDA, implying compliance with U.S. regulatory standards. Clinical sites are mentioned, suggesting real-world data collection.
    • Retrospective or Prospective: Not explicitly stated. The statements that the "4 clinical sites used in the pivotal study were a subset of a larger 13 sites used as part of the training data set" and that there was "no overlapping data between the training sets and the pivotal study in terms of time and patient images" indicate that the test set was collected independently of the training data, likely retrospectively, drawn from existing clinical acquisitions.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

  • Number of Experts: Not explicitly stated as a specific number, but referred to as "trained radiologists" (plural), indicating more than one.
  • Qualifications of Experts: Described as "trained radiologists." Specific experience (e.g., "10 years of experience") is not provided.

4. Adjudication Method for the Test Set

  • Adjudication Method: Not explicitly detailed. The text states there was "a high degree of agreement between the different trained radiologists as demonstrated by statistical analysis," suggesting that the ground truth was established through some form of consensus among multiple radiologists. This could imply a majority vote, averaging, or a formal consensus meeting, but the specific method (e.g., 2+1, 3+1) is not provided.
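Since the submission reports inter-reader agreement without naming the statistic, one simple possibility is the mean pairwise absolute difference between readers' measurements of the same cases. The reader values below are invented, and this metric is only one plausible choice, not the method the submission actually used.

```python
# Sketch of one simple inter-reader agreement statistic: the mean pairwise
# absolute difference between readers measuring the same cases.
# Reader values are invented; the submission does not name its method.
from itertools import combinations


def mean_pairwise_abs_diff(readings_by_reader):
    """readings_by_reader: list of per-reader measurement lists (same case order)."""
    diffs = []
    for a, b in combinations(readings_by_reader, 2):
        diffs.extend(abs(x - y) for x, y in zip(a, b))
    return sum(diffs) / len(diffs)


readers = [
    [45.0, 40.1, 50.2],  # reader 1 diameters (mm)
    [44.6, 40.0, 49.8],  # reader 2
    [45.3, 39.7, 50.5],  # reader 3
]
print(f"mean pairwise |diff| = {mean_pairwise_abs_diff(readers):.2f} mm")
```

A small mean pairwise difference relative to the ventricular diameters themselves would support the "high degree of agreement" claim; a formal study would more likely report an intraclass correlation coefficient or limits of agreement.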

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study, and the Effect Size of Reader Improvement with vs. without AI Assistance

  • An MRMC comparative effectiveness study was not described in the provided text. The study focused on validating the standalone performance of the Viz RV/LV algorithm against a human-established ground truth. The device is intended to "provide the user with annotated images showing ventricular measurements" and is "not intended to be used on a stand-alone basis for clinical decision-making or otherwise preclude clinical assessment of CTPA cases," suggesting it is a human-in-the-loop aid; however, the study did not quantify how much human readers improve with versus without AI assistance.

6. Standalone Performance Study (Algorithm Only, Without Human-in-the-Loop)

  • Yes, a standalone performance study was done. The text explicitly states, "Clinical testing was performed as a study comparing the Viz RV/LV's output to the ground truth as established by trained radiologists." This means the algorithm's raw output was directly compared to the expert ground truth, without human intervention or modification of the algorithm's output.

7. The Type of Ground Truth Used

  • Type of Ground Truth: Expert Consensus / Manual Measurements. The ground truth was "established by trained radiologists" and involved "measurements that were obtained manually."

8. The Sample Size for the Training Set

  • Training Set Sample Size: Not explicitly stated as a number of cases, but it mentions that "a larger 13 sites used as part of the training data set." The absolute number of cases from these 13 sites is not given.

9. How the Ground Truth for the Training Set was Established

  • The text does not explicitly detail how the ground truth for the training set was established. It only mentions that the "4 clinical sites used in the pivotal study were a subset of a larger 13 sites used as part of the training data set," implying that the training data also came from clinical sources. One can infer that it likely involved similar methods of expert annotation or manual measurement, given the nature of the task and the validation approach. However, specific details are absent.

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).