The LVivo platform is intended for non-invasive processing of ultrasound images to detect, measure, and calculate relevant medical parameters of structure and function in patients with suspected disease. In addition, it can provide Quality Score feedback.
The LVivo IQS is an extension to the LVivo IQS (K222970): an additional algorithm with an API that provides a Quality Score in real time for the Right Ventricle from the 4-chamber apical view of the heart. In addition, the LVivo IQS will be provided as a software component to be integrated by another developer into their legally marketed ultrasound imaging device. Essentially, the algorithm-and-API module is a medical device accessory.
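Since the submission describes the module only at this level, the following is a minimal sketch of what host-side integration of such a quality-score API might look like, assuming a callback-per-frame design. All names here (`QualityScorer`, `score_frame`, `QualityLevel`) are hypothetical; the actual LVivo IQS API is not described in the source document.

```python
# Hypothetical sketch of integrating a real-time quality-score module into a
# host ultrasound application. Names are illustrative assumptions, not the
# vendor's actual API.
from dataclasses import dataclass
from enum import Enum


class QualityLevel(Enum):
    LOW = "Low"        # "Low" is inferred; the document names only "Medium" and "Good"
    MEDIUM = "Medium"
    GOOD = "Good"


@dataclass
class QualityResult:
    level: QualityLevel
    frame_index: int


class QualityScorer:
    """Stand-in for the vendor-supplied RV quality-scoring module."""

    def score_frame(self, frame: bytes, frame_index: int) -> QualityResult:
        # The real module would run its AI model on the 4CH apical view here;
        # this stub always reports MEDIUM so the sketch stays runnable.
        return QualityResult(QualityLevel.MEDIUM, frame_index)


def on_new_frame(scorer: QualityScorer, frame: bytes, frame_index: int) -> None:
    """Host-application callback: score each live frame and surface the
    result to the operator in real time."""
    result = scorer.score_frame(frame, frame_index)
    print(f"frame {result.frame_index}: RV quality = {result.level.value}")


if __name__ == "__main__":
    scorer = QualityScorer()
    for i in range(3):
        on_new_frame(scorer, b"\x00" * 16, i)  # dummy frame bytes
```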
Here's a breakdown of the acceptance criteria and the studies demonstrating that the device meets them, based on the provided text:
Acceptance Criteria and Reported Device Performance
The device is an extension to the LVivo IQS (K222970) that provides a Quality Score in real time for the Right Ventricle (RV) from the 4-chamber apical view of the heart. The acceptance criteria focus on agreement of the device's quality scoring with expert assessment and on the clinical interpretability of images the device rates "Medium" or "Good" (a sketch of these checks follows the table).
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Study 1: Agreement with Sonographer Tagging | |
| Overall agreement of 75% between the LVivo IQS results and the data tagging by experienced sonographers. | The overall agreement between the LVivo IQS (RV) and quality tagging by the experienced sonographers was 77%. (Meets criteria) |
| Study 2: Real-time Use and Clinical Interpretability | |
| a. 80% of the saved exams with an image quality ACEP score of 3-5 received at least "Medium" image quality by LVivo IQS. | In 85% of the patients with image quality 3-5 by visual estimation, it was possible to obtain at least a "Medium" quality score by LVivo IQS. (Meets criteria) |
| b. 90% of these cases (from criterion 'a') were clinically interpretable by the majority of three expert cardiologists specializing in echo. | 92% of the above saved clips were clinically interpretable. (Meets criteria) |
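For concreteness, here is a minimal sketch of how the acceptance checks above might be computed, assuming per-case quality labels are available as plain Python lists. The toy data, label strings, and counts are illustrative; the actual analysis pipeline is not described in the document.

```python
# Illustrative acceptance checks; all data below is hypothetical.

def overall_agreement(device_labels, expert_labels):
    """Fraction of cases where device and expert quality labels match."""
    assert len(device_labels) == len(expert_labels)
    matches = sum(d == e for d, e in zip(device_labels, expert_labels))
    return matches / len(device_labels)

# Study 1: overall agreement with sonographer tagging must be >= 75%
# (the submission reports 77%).
device = ["Good", "Medium", "Low", "Good"]
experts = ["Good", "Medium", "Medium", "Good"]
print(overall_agreement(device, experts) >= 0.75)  # True: 3/4 = 75% here

# Study 2 criteria are simple proportions against fixed thresholds.
acep_3_to_5_exams = 100   # hypothetical count of exams rated ACEP 3-5
medium_or_better = 85     # hypothetical count scored at least "Medium"
interpretable = 79        # hypothetical interpretable count among those 85
print(medium_or_better / acep_3_to_5_exams >= 0.80)  # criterion (a)
print(interpretable / medium_or_better >= 0.90)      # criterion (b)
```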
Study Details
- Sample sizes used for the test set and the data provenance:
  - Study 1: 100 patient examinations were used for validation, comprising 49,623 analyzed frames. Data were acquired with "different ultrasound devices" and cover "various cardiac pathologies." The source country of the data is not stated, nor is it stated whether the study was retrospective or prospective, though the reference to already-acquired data suggests a retrospective approach for this test.
  - Study 2: 182 patients were included in the study. This study used "data acquired after using the LVivo IQS in real time while scanning the RV from the 4CH apical view," indicating prospective data collection directly utilizing the device. The data were gathered in a "Point of Care environment." The country of origin is not specified.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
  - Study 1: "Experienced sonographers"; the exact number is not stated. No years of experience are provided, but the term "experienced" implies relevant qualifications.
  - Study 2: "Three expert cardiologists specializing in echo." No specific years of experience are listed, but "expert" implies high-level qualification in the field.
- Adjudication method for the test set:
  - Study 1: Not explicitly stated; the text refers only to "data tagging by experienced sonographers," suggesting a direct comparison against the sonographers' quality assessment.
  - Study 2: For clinical interpretability, the judgment was made "by the majority of three expert cardiologists specializing in echo," i.e., a majority vote (2 out of 3 agreement); a minimal sketch of this rule follows the list.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
  - A formal MRMC study comparing human readers with and without AI assistance is not described. The studies validate the device's standalone performance (Study 1) or its real-time use alongside human actions (Study 2, where physicians documented scores). The focus is on the device's ability to provide a consistent, clinically relevant quality score rather than on measuring reader improvement with AI assistance.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
  - Study 1 serves as a standalone evaluation: the algorithm processes pre-existing images and outputs a quality score, which is then compared to the experienced sonographers' quality tagging.
  - Study 2 involves the device being used in "real time while scanning," a human-in-the-loop scenario in which the device provides feedback during acquisition. The subsequent expert-cardiologist review of clinical interpretability can be seen as assessing the combined output of the device and the acquisition process it influenced.
- The type of ground truth used:
  - Study 1: "Quality tagging by experienced sonographers," i.e., expert consensus/opinion on image quality.
  - Study 2:
    - For the device's real-time quality score (criterion 'a'), the ground truth was the "visual estimation" via the "ACEP score (1-5)" made by the medical doctors during acquisition; this represents user-reported quality.
    - For clinical interpretability (criterion 'b'), the ground truth was the judgment of interpretability "by the majority of three expert cardiologists specializing in echo," i.e., expert clinical interpretation/consensus.
- The sample size for the training set:
  - The document does not specify the sample size for the training set. It only describes the validation/test sets.
- How the ground truth for the training set was established:
  - The document does not provide this information; it discusses ground truth only for the validation/test sets. The algorithm is described as "AI based," which implies a training phase, but details are omitted.
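As noted in the adjudication item above, Study 2's interpretability ground truth rests on a 2-of-3 majority among expert readers. Here is a minimal sketch of that rule, with hypothetical boolean votes (True = "clinically interpretable"):

```python
# Illustrative 2-of-3 majority adjudication; the votes are hypothetical.

def majority_interpretable(votes):
    """True when more than half of the expert readers (e.g., 2 of 3)
    judge the clip clinically interpretable."""
    return sum(votes) > len(votes) / 2

print(majority_interpretable([True, True, False]))   # True: 2 of 3 agree
print(majority_interpretable([True, False, False]))  # False: only 1 of 3
```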
§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).