QLAB Advanced Quantification Software is a software application package designed to view and quantify image data acquired on Philips Healthcare ultrasound systems.
Philips QLAB Advanced Quantification software (QLAB) is designed to view and quantify image data acquired on Philips ultrasound products. QLAB is available as a stand-alone product that can run on a standard PC or a dedicated workstation, or on-board Philips ultrasound systems. It can be used for the off-line review and quantification of ultrasound studies.
QLAB software provides basic and advanced quantification capabilities across a family of PC- and cart-based platforms. QLAB software functions through Q-App modules, each of which provides specific capabilities.
The provided FDA 510(k) summary for Philips' QLAB Advanced Quantification Software (K171314) focuses on modifications to existing Q-Apps (a2DQ and aCMQ/CMQ Stress) and primarily addresses software verification and validation, rather than a clinical study establishing new acceptance criteria or device performance through a comparative effectiveness study.
Therefore, much of the requested information (such as specific performance metrics, sample sizes for test sets, expert qualifications, adjudication methods, and MRMC study details) is not explicitly detailed in this document in the typical format of a clinical performance study. The document emphasizes equivalence to a predicate device and internal testing.
However, based on the provided text, here's an attempt to answer the questions, highlighting where information is not available:
Acceptance Criteria and Device Performance Study Details
1. Table of Acceptance Criteria and Reported Device Performance
The document does not specify quantitative acceptance criteria or a "reported device performance" in terms of clinical metrics (e.g., sensitivity, specificity, accuracy) from a comparative study. Instead, the acceptance is based on the device meeting its defined requirements and performance claims during internal software verification and validation.
Acceptance Criteria (Implied from the document): The modified QLAB a2DQ and aCMQ/CMQ Stress Q-Apps are safe and effective and introduce no new risks, meeting defined requirements and performance claims validated through internal processes.
Reported Device Performance:
- The modifications to the a2DQ and aCMQ/CMQ Stress Q-Apps were tested in accordance with Philips internal processes.
- Verification and software validation data support the proposed modified QLAB a2DQ/aCMQ/CMQ Stress software relative to the currently marketed unmodified QLAB software.
- Testing demonstrated that the proposed QLAB Advanced Quantification Software, with modified Q-Apps, meets defined requirements and performance claims.
2. Sample size used for the test set and the data provenance
- Sample Size: Not specified. The document refers to "software verification and validation data," but does not provide details on the number of cases or images used in this testing.
- Data Provenance: Not specified. It only mentions "Philips internal processes" for testing. Specifics like country of origin or retrospective/prospective nature of data are not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- This information is not provided. The document focuses on software validation and does not detail an expert-based ground truth establishment process for a clinical test set.
4. Adjudication method for the test set
- This information is not provided, as the nature of the "test set" described is for software verification/validation rather than a clinical adjudication process.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No multi-reader multi-case (MRMC) comparative effectiveness study is mentioned in this document. The submission focuses on device equivalence and software modifications, not an assessment of human reader improvement with AI assistance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- The document implies that "Software Verification and Validation testing" was performed for the algorithms. While it does not explicitly describe a "standalone performance study," the context of a software-only device and a modification to a software application package suggests that the functional testing was inherently of the algorithms' performance. However, no specific performance metrics (e.g., accuracy, precision) are reported for this standalone testing.
7. The type of ground truth used
- The document does not explicitly state the type of "ground truth" in a clinical sense (e.g., pathology, outcomes data, expert consensus). Because this is a software modification submission, the "ground truth" for validation was likely based on established reference values or measurements within the existing QLAB system, against which the modified algorithms were checked for consistent and accurate computation during the "Requirements Review," "Design Review," "Risk Management," and "Software Verification and Validation" activities.
8. The sample size for the training set
- Not applicable/Not specified. The document describes modifications to existing software ("QLAB builds upon a simple and thoroughly modular design"). It does not describe the development of a de novo AI algorithm that would typically involve a separate "training set." The focus is on the verification of modified functionalities within an existing proven system.
9. How the ground truth for the training set was established
- Not applicable/Not specified, as no training set for a new AI algorithm is discussed.
Summary of Document Focus:
This FDA 510(k) summary is for a software modification to an existing device (QLAB Advanced Quantification Software). The primary goal is to demonstrate "substantial equivalence" to a predicate device and to show that the modifications do not introduce new safety or effectiveness risks. The "study" referenced is internal software verification and validation, not a clinical trial or comparative effectiveness study. Therefore, the details requested for clinical performance metrics, reader studies, and explicit ground truth establishment for clinical data sets are largely absent from this particular type of submission.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).