Philips QLAB Advanced Quantification software (QLAB) is a software application package designed to view and quantify image data acquired on Philips ultrasound systems. QLAB is available as a stand-alone product that can run on a standard PC or a dedicated workstation, or on board Philips ultrasound systems.
The subject QLAB 3D Auto RV application integrates the segmentation engine of the cleared QLAB HeartModel (K181264) with the TomTec-Arena 4D RV-function (cleared under K150122), thereby providing dynamic right-ventricle clinical functionality. The proposed 3D Auto RV application applies HeartModel's automatic segmentation technology to the right ventricle and uses machine learning algorithms to identify the endocardial contours of the right ventricle.
Here's a summary of the acceptance criteria and the study details for the QLAB Advanced Quantification Software 13.0, specifically for its 3D Auto RV application:
1. Table of Acceptance Criteria and Reported Device Performance
| Metric | Acceptance Criteria | Reported Device Performance (3D Auto RV vs. predicate 4D RV) | Reported Device Performance (3D Auto RV vs. CMR) |
|---|---|---|---|
| RV end-diastolic volume error rate | Below 15% (compared to predicate) | Below 15% | Less than 15% difference |
| RV end-diastolic volume (RMSE) | Not explicitly stated as an independent acceptance criterion, but part of validation | 8.3 ml RMSE | Not explicitly reported for this metric |
| RV end-systolic volume (RMSE) | Not explicitly stated as an independent acceptance criterion, but part of validation | 2.7 ml RMSE | Not explicitly reported for this metric |
| RV ejection fraction (RMSE) | Not explicitly stated as an independent acceptance criterion, but part of validation | 2.7% RMSE | Not explicitly reported for this metric |
| User ability to discern and revise | Healthcare professional able to successfully determine when contours require revision and capable of revising | Users were able to discern which images needed manual editing on all cases | Not explicitly reported for this metric |
| Accuracy and reproducibility (external study) | Not stated as a numerical acceptance criterion, but "accurate and highly reproducible" | Accurate and highly reproducible; no revision needed in 1/3 of patients, minor revisions in the rest | Less than 15% difference (for RV volume) |
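The two summary statistics in the table, RMSE and a per-case percent error rate against the reference measurement, are standard agreement metrics. A minimal sketch of how they are computed is shown below; the function names and the paired volume values are illustrative assumptions, not data from the submission.

```python
import math

def rmse(device, reference):
    """Root-mean-square error between paired measurements."""
    assert len(device) == len(reference) and device
    return math.sqrt(
        sum((d - r) ** 2 for d, r in zip(device, reference)) / len(device)
    )

def max_percent_error(device, reference):
    """Largest per-case percent difference relative to the reference."""
    return max(abs(d - r) / r * 100 for d, r in zip(device, reference))

# Hypothetical paired RV end-diastolic volumes in ml (not from the 510(k)).
auto_rv   = [152.0, 98.5, 120.3, 175.8]
predicate = [148.0, 95.0, 118.0, 170.0]

print(f"RMSE: {rmse(auto_rv, predicate):.1f} ml")
print(f"Worst-case error rate: {max_percent_error(auto_rv, predicate):.1f}%")
print("Meets <15% criterion:", max_percent_error(auto_rv, predicate) < 15.0)
```

An acceptance criterion phrased as "error rates below 15% for every data set tested" corresponds to the worst-case per-case check, not the average, which is why the sketch takes the maximum.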
2. Sample Size and Data Provenance
- Test Set Sample Size: Not explicitly stated for either the internal validation study or the external published study.
- Data Provenance:
- Internal Validation Study: "Test datasets were segregated from training data sets." No country of origin is stated; the use of pre-existing "data sets" implies a retrospective design.
- External Published Study: Not specified, but it's an "external study published in the Journal of the American Society of Echocardiography."
3. Number of Experts and Qualifications for Ground Truth (Test Set)
- Internal Validation Study: Not specified. However, the comparison is primarily against a "predicate 4D RV" which would have its own established methodology. The "healthcare professional" is mentioned in the context of user evaluation.
- External Published Study: Not specified. The ground truth method is cross-modality CMR, implying a reference standard rather than expert consensus on the test images themselves.
4. Adjudication Method (Test Set)
- Internal Validation Study: Not explicitly stated. The comparison is against the predicate device's measurements.
- External Published Study: Not explicitly stated. Ground truth was established by cross-modality Cardiac Magnetic Resonance (CMR).
5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study
- The document does not explicitly describe a formal MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. The text mentions that "the healthcare professional was able to successfully determine which contours required revision and was capable of revising," which suggests a human-in-the-loop scenario, but a comparative effectiveness study with effect size is not reported.
6. Standalone (Algorithm Only) Performance
- Yes, a standalone performance evaluation of the algorithm is implied. The internal validation study reports "RV end diastolic volume error rates below 15% for every data set tested compared to the predicate 4D RV," and RMSE values for volume and EF. The external study also reports the 3D Auto RV's performance against CMR. While user interaction for editing is a feature, the initial segmentation engine and its quantification are evaluated in a standalone manner before potential revision.
7. Type of Ground Truth Used
- Internal Validation Study: The primary comparison for quantitative metrics (volumes, EF) is against the "predicate 4D RV" (TomTec-Arena 4D RV-function, K150122). This suggests the predicate's measurements served as a reference.
- External Published Study: Cross-modality Cardiac Magnetic Resonance (CMR) was considered the gold standard ("Ground truth in this study was considered to be the cross-modality CMR").
8. Sample Size for the Training Set
- Not explicitly stated for the machine learning algorithm. The document only mentions that "Test datasets were segregated from training data sets."
9. How Ground Truth for the Training Set Was Established
- Not explicitly detailed. The device description states the 3D Auto RV application "uses machine learning algorithms to identify the endocardial contours of the Right Ventricle." It also mentions "Algorithm Training procedure is same between the subject and the predicate HeartModel." For HeartModel (the segmentation engine's predecessor for LV), expert-defined contours on extensive datasets would typically be used for training, but this is not explicitly stated for the RV training.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).