Search Results
Found 6 results
(28 days)
QLAB QUANTIFICATION SOFTWARE
QLAB Quantification Software is a software application package. It is designed to view and quantify image data acquired on Philips Healthcare ultrasound products.
QLAB Quantification software is available as a stand-alone product that can function on a standard PC, on a dedicated workstation, or on board Philips' ultrasound systems. It can be used by trained healthcare professionals for the on-line and off-line review and quantification of ultrasound studies in healthcare facilities/hospitals. The QLAB Quantification software application package is designed to view and quantify image data acquired on Philips ultrasound products. The four modified plug-ins, a2DQ, aCMQ, MVN, and Heart Model, are applications within Philips QLAB Quantification software.
The provided document (K132165) describes modifications to existing QLAB Quantification software Q-Apps (a2DQ, aCMQ, MVN, and Heart Model). The submission is a special 510(k) and focuses on demonstrating that the modified software maintains the same level of safety and effectiveness as the predicate (unmodified) versions. It does not present a de novo study to establish new acceptance criteria or full device performance metrics beyond confirming equivalence to the predicate.
Therefore, direct responses to some of your questions, particularly those asking for the reported device performance against acceptance criteria or details of a stand-alone study with specific metrics, are not explicitly provided in this type of submission. The document emphasizes verification and validation against the predicate's established performance, rather than setting and meeting entirely new benchmarks.
Here's an attempt to answer your questions based on the provided text, indicating where information is not explicitly stated for this type of submission:
1. A table of acceptance criteria and the reported device performance
The document states: "Verification and Validation testing concluded that the modified QLAB Q-Apps are safe and effective and introduced no new risks." and "Testing performed demonstrated that the QLAB Quantification software with modified Q-Apps meets all defined reliability requirements and performance claims."
This indicates that the acceptance criteria for this special 510(k) were based on demonstrating equivalence in safety, effectiveness, reliability, and performance to the legally marketed predicate devices. Specific quantitative acceptance criteria (e.g., "EF measurement must be within X% of ground truth") and corresponding reported performance values are not detailed in this summary.
Acceptance Criteria (Implied) | Reported Device Performance (Implied) |
---|---|
Maintain safety profile of predicate device | No new risks introduced. |
Maintain effectiveness profile of predicate device | Safe and effective. |
Meet established reliability requirements of predicate device | Meets all defined reliability requirements. |
Meet established performance claims of predicate device | Meets all defined performance claims. |
Support intended use | Does not alter the intended use. |
Not introduce new technological characteristics | Has the same technological characteristics as the legally marketed device. |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not specify the sample size for the test set used in verification and validation activities, nor does it provide details on the data provenance (country of origin, retrospective/prospective). This level of detail is typically found in the full test reports, not a 510(k) summary for modifications.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not specify the number or qualifications of experts used to establish ground truth for the test set.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not specify the adjudication method used for the test set.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance
The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study. The focus is on software modifications to existing Q-Apps for improved workflow and semi-automation, not on a comparative effectiveness study of human readers with versus without AI assistance. The modifications, particularly for MVN and a2DQ, aim to improve user efficiency and ease of use, which implies improved human reader efficiency, but this is not quantified as an effect size from an MRMC study in this summary.
6. Whether a standalone study (i.e., algorithm-only, without human-in-the-loop performance) was done
The document repeatedly mentions "semi-automated border detection" (a2DQ), "automatically draw a region of interest" (aCMQ), and "semi-automation for greater efficiency and ease of use" (MVN), with an explicit mention that "The modified Heart Model application allows users to override border placement. The user may edit the border by clicking and dragging the border to the desired location." This strongly suggests that these are human-in-the-loop applications where the software provides automated assistance but the user retains control and the ability to edit. Therefore, a purely standalone (algorithm-only) performance evaluation independent of human interaction is not implied or described by the context of these modifications.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document refers to "Verification and Validation testing" demonstrating that the modified software meets "all defined reliability requirements and performance claims" relative to the "predicate." It does not explicitly state the type of ground truth used. For quantification software in medical imaging, ground truth often involves:
- Manual tracings/measurements by expert cardiologists.
- Correlation with other imaging modalities (e.g., cardiac MR for Heart Model, as noted: "Measurements are closely correlated to cardiac MR").
- Possibly phantoms or simulated data for certain aspects.
However, the specific methods are not detailed in this summary.
8. The sample size for the training set
The document does not provide any information regarding the sample size for a training set. As this is a special 510(k) for modifications, it's possible that the modifications primarily involved refinement of existing algorithms and workflows, rather than a complete retraining of an AI model requiring a new, distinct training set.
9. How the ground truth for the training set was established
The document does not provide information on how ground truth for any potential training set was established.
(110 days)
QLAB QUANTIFICATION SOFTWARE HEART MODEL
QLAB Quantification Software is a software application package. It is designed to view and quantify image data acquired on Philips Healthcare ultrasound products.
The QLAB software application is available as a stand-alone product that can function on a standard PC, on a dedicated workstation, or on board Philips' ultrasound systems. It can be used for the on-line and off-line review and quantification of ultrasound studies. QLAB Quantification software now includes the Heart Model application. The Heart Model application provides automatic 3D anatomical segmentation and identification of the heart chambers for the End Diastole (ED) and End Systole (ES) cardiac phases. The Heart Model segmentation algorithm draws segmented borders for select standard American Society of Echocardiography (ASE) apical and short-axis views. This provides a streamlined workflow for obtaining cardiac 3D quantitative heart chamber measurements and calculated result values.
Here's an analysis of the QLAB Quantification Software with Heart Model based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided text does not explicitly state acceptance criteria in terms of specific performance metrics or thresholds. Instead, it mentions that the device was "clinically evaluated and accepted" and that "the LA and LV chamber measurements correlated to cardiac MRI." This suggests that the primary acceptance criterion was a demonstrated correlation with a gold standard (cardiac MRI) for key measurements, and a general confirmation that the device "met its intended use."
Without specific numerical acceptance criteria from the document, we can infer the reported performance based on the general statements:
Acceptance Criteria (Inferred from documentation) | Reported Device Performance |
---|---|
Clinically evaluated and accepted for functionality and performance. | External clinicians evaluated functionality and performance; confirmed Heart Model met its intended use. |
LA and LV chamber measurements correlate to cardiac MRI. | LA and LV chamber measurements correlated to cardiac MRI. |
Safe and effective as predicate devices. | Determined to be as safe and effective as predicate devices. |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: The document does not specify the sample size for the clinical evaluation (external validation) during which external clinicians evaluated the functionality and performance of Heart Model.
- Data Provenance: The document states "External clinicians evaluated the functionality and performance of Heart Model." This suggests prospective data collection during the clinical evaluation, but it doesn't specify the country of origin of the data.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Unspecified. The document states "External clinicians evaluated the functionality and performance of Heart Model." This implies multiple clinicians, but a specific number is not given.
- Qualifications of Experts: Unspecified. They are referred to as "external clinicians," but their specific specialties or years of experience are not provided.
4. Adjudication Method for the Test Set
The document does not describe a formal adjudication method (e.g., 2+1, 3+1). It states "External clinicians evaluated the functionality and performance of Heart Model," implying a consensus or individual evaluation, but no specific process for resolving discrepancies is detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, a multi-reader multi-case (MRMC) comparative effectiveness study evaluating the improvement of human readers with AI assistance versus without AI assistance was not explicitly mentioned or described in the provided text. The study focused on the standalone performance of the Heart Model and its correlation with cardiac MRI.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes, a standalone performance evaluation was done. The document states: "The Heart Model application provides automatic 3D anatomical segmentation and identification of the heart chambers..." and "QLAB Heart Model finds the chambers of the Heart and then displays the Heart Model determined chamber borders in the ASE/ESE views for the user to Accept or Reject." The clinical evaluation compared the device's generated measurements to cardiac MRI, implying an assessment of the algorithm's raw output before user interaction. The statement about "LA and LV chamber measurements correlated to cardiac MRI" refers to the output of the Heart Model algorithm.
7. The Type of Ground Truth Used
The primary type of ground truth used for the comparison was cardiac MRI measurements for Left Atrial (LA) and Left Ventricular (LV) chamber measurements. This is considered a highly reliable imaging modality for cardiac quantification.
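The summary reports correlation with cardiac MRI but gives no numbers. For readers unfamiliar with how such measurement agreement is typically quantified, the sketch below computes a Pearson correlation together with a Bland-Altman bias and 95% limits of agreement. This is a generic illustration only: the function name and the volume values are invented, not data from the submission.

```python
import numpy as np

def agreement_stats(device, reference):
    """Pearson correlation plus Bland-Altman bias and 95% limits of
    agreement between device measurements and a reference modality."""
    device = np.asarray(device, dtype=float)
    reference = np.asarray(reference, dtype=float)
    r = float(np.corrcoef(device, reference)[0, 1])
    diff = device - reference
    bias = float(diff.mean())
    half_width = 1.96 * float(diff.std(ddof=1))
    return r, bias, (bias - half_width, bias + half_width)

# Illustrative synthetic LV volumes in mL -- not data from the submission.
lv_ultrasound = [120.0, 95.0, 150.0, 80.0, 110.0]
lv_mri = [122.0, 98.0, 148.0, 83.0, 112.0]
r, bias, limits = agreement_stats(lv_ultrasound, lv_mri)
```

A high `r` alone can hide a systematic offset, which is why Bland-Altman bias and limits are usually reported alongside correlation in agreement studies.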
8. The Sample Size for the Training Set
The document does not provide the sample size used for the training set. It mentions that the QLAB Heart Model algorithm is "reused from Philips Brilliance CT (K042293), but modified for ultrasound," suggesting prior training on CT data and potential further training/adaptation for ultrasound, but no specifics are given.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for the training set was established. Given the algorithm's origin from Philips Brilliance CT and adaptation for ultrasound, it's plausible that ground truth was established by expert annotation or comparison to established methods on CT and subsequently on ultrasound data, but this is not explicitly detailed.
(16 days)
QLAB QUANTIFICATION SOFTWARE
QLAB Quantification Software is a software application package. It is designed to view and quantify image data acquired on Philips Medical Systems ultrasound products.
QLAB Quantification software is available as a stand-alone product that can function on a standard PC, on a dedicated workstation, or on board Philips' ultrasound systems. It can be used by trained healthcare professionals for the on-line and off-line review and quantification of ultrasound studies in healthcare facilities/hospitals.
The QLAB Quantification software application package is designed to view and quantify image data acquired on Philips ultrasound products. Cardiac Motion Quantification (CMQ) is a plug-in included in Philips QLAB Quantification software.
The CMQ plug-in is an application within QLAB intended to provide cardiac motion quantification. QLAB Quantification software is intended for use in healthcare facilities/hospitals by trained healthcare professionals.
QLAB CMQ modifications were implemented to provide clients with improved reproducibility and consistency between users, as well as to provide users with a reduction of workflow steps. The modifications described in this Special 510(k) submission do not alter the intended use of the QLAB Quantification software with the CMQ plug-in.
Here's a breakdown of the acceptance criteria and study information for the QLAB CMQ Plug-in Modifications, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria (Modified CMQ vs. Unmodified CMQ) | Reported Device Performance |
---|---|
Improved workflow: fewer mouse clicks for typical assessment | Verification and validation testing concluded the modification achieved this. |
Improved workflow: decreased average time for typical assessment | Verification and validation testing concluded the modification achieved this. |
Decreased intra-observer variability of assessments | Verification and validation testing concluded the modification achieved this. |
Decreased inter-observer variability of assessments | Verification and validation testing concluded the modification achieved this. |
Safe and effective release, no new risks introduced | Verification and validation testing concluded the modification achieved this. |
Meets all defined reliability requirements and performance claims | Testing demonstrated this. |
2. Sample Size and Data Provenance
The document does not explicitly state the sample size used for the test set or the data provenance (e.g., country of origin of the data, retrospective or prospective). It mentions "Philips verification and validation processes" and "system level tests, performance tests, and safety testing from hazard analysis," but lacks specific details on the datasets used in these tests.
3. Number of Experts and Qualifications
The document does not specify the number of experts used to establish the ground truth for the test set or their qualifications. It states that the device "can be used by trained healthcare professionals," implying expert users, but doesn't detail their involvement in the testing.
4. Adjudication Method
The document does not describe any specific adjudication method (e.g., 2+1, 3+1, none) for establishing ground truth or evaluating the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not describe a multi-reader multi-case (MRMC) comparative effectiveness study. It focuses on the improvements of the modified CMQ plugin over the unmodified CMQ plugin in terms of workflow and variability, rather than comparing it to human-only performance or quantifying an effect size of human improvement with AI assistance.
6. Standalone (Algorithm Only) Performance
The device itself is a plug-in within the larger QLAB software suite. Although QLAB can run "either as a stand-alone product that can function on a standard PC, on board a dedicated workstation, or on-board Philips' ultrasound systems," the performance claims relate to the CMQ plug-in's function within the QLAB environment, and specifically to the modifications made to it. The text does not explicitly describe a standalone, algorithm-only study contrasting the plug-in's output against a ground truth without human interaction, beyond the general claims of decreased intra-observer and inter-observer variability. The stated goal of "improved reproducibility and consistency between users" implies that human interpretation remains integral and that the software assists this process.
7. Type of Ground Truth
The document explicitly states the intent to "provide clients with improved reproducibility and consistency between users, as well as to provide users with a reduction of workflow steps" and to achieve "Decreased intra-observer variability of assessments; and Decreased inter-observer variability of assessments." This strongly suggests that the ground truth for evaluating these improvements was based on expert consensus or comparative measurements performed by experts, where the goal was to minimize the deviation of measurements between and within experts, facilitated by the software. It does not mention pathology or outcomes data as the primary ground truth for the specific performance claims of this modification.
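Intra- and inter-observer variability claims of this kind are commonly quantified with metrics such as the coefficient of variation (CV) across repeated measurements. The sketch below is a hypothetical illustration of that computation; the readings and names are invented and do not come from the submission.

```python
import numpy as np

def cv_percent(values):
    """Coefficient of variation (%) of a set of repeated measurements."""
    v = np.asarray(values, dtype=float)
    return 100.0 * float(v.std(ddof=1)) / float(v.mean())

# Hypothetical repeated strain readings (%) on the same case:
reader_a = [18.2, 18.6, 18.1]                      # one reader, three repeats
reader_b = [19.0, 18.8, 19.3]                      # a second reader, three repeats
intra_cv = cv_percent(reader_a)                    # intra-observer variability
inter_cv = cv_percent([float(np.mean(reader_a)),
                       float(np.mean(reader_b))])  # inter-observer variability
```

A modification claiming "decreased intra-observer and inter-observer variability" would be expected to show lower CVs (or similar metrics such as the intraclass correlation coefficient) for the modified software than for the predicate.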
8. Sample Size for the Training Set
The document does not include any information about the sample size used for a training set. As this is a "Special 510(k) Premarket Notification" for modifications to an already cleared device, it's possible that the core algorithm was trained previously, and this submission focuses on validation of the changes.
9. How Ground Truth for Training Set was Established
Since no training set information is provided, there is no description of how ground truth for a training set was established.
(15 days)
QLAB QUANTIFICATION
QLAB Quantification software is a software application package. It is designed to view and quantify image data acquired on Philips Medical Systems ultrasound products.
QLAB version 6.0 adds the QLAB MVQ Plug-in and QLAB TMQ/TMQA Plug-ins to the cleared QLAB software.
QLAB MVQ Plug-in (MVQ stands for Mitral Valve Quantification): a tool to assess lengths, distances, areas, volumes, and angles of mitral valve structures from a Philips Ultrasound system 3D dataset.
QLAB TMQ/TMQA Plug-ins (TMQ and TMQA stand respectively for Tissue Motion Quantification and Tissue Motion Quantification Advanced): the TMQ and TMQA plug-ins are an addition to the existing 2DQ plug-in already described in the iE33 510(k) submission K042540. The QLAB TMQ/TMQA plug-ins provide an ultrasound tracking algorithm that uses a pattern-matching method to measure wall lengthening, shortening, and displacement in a set of 2D images. Radial, longitudinal, and circumferential lengths and strains are computed, as well as rotation, rotation gradient, and absolute speed. TMQ provides a single myocardial layer and the 17-segment model only; TMQA provides multiple myocardial layers and a choice of segmentation on or off. The CAK (Cardiac Annular Kinesis) feature is an addition to the existing 2DQ plug-in already described in the iE33 510(k) submission K042540. CAK will be a feature of the existing 2DQ plug-in as well as the TMQ and TMQA plug-ins. CAK is a quick survey tool for global function assessment.
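Pattern-matching tissue tracking of the kind described above is commonly implemented as block matching: a small reference patch from one frame is searched for in the next frame by maximizing normalized cross-correlation. The toy sketch below shows the general idea only; it is not Philips' algorithm, and every name and parameter is hypothetical.

```python
import numpy as np

def track_block(frame0, frame1, center, half=4, search=3):
    """Find the displacement of a small image block from frame0 to
    frame1 by maximizing normalized cross-correlation over a search
    window. A toy sketch of pattern-matching tracking only."""
    cy, cx = center
    ref = frame0[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    best_score, best = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame1[cy + dy - half:cy + dy + half + 1,
                          cx + dx - half:cx + dx + half + 1].astype(float)
            cand = (cand - cand.mean()) / (cand.std() + 1e-9)
            score = float((ref * cand).mean())
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# Synthetic check: frame1 is frame0 shifted down 1 row and right 2 columns.
rng = np.random.default_rng(0)
frame0 = rng.random((32, 32))
frame1 = np.roll(frame0, shift=(1, 2), axis=(0, 1))
dy, dx = track_block(frame0, frame1, (16, 16))
```

Tracking many such blocks frame-to-frame along the myocardium yields the displacement fields from which lengthening, shortening, and strain values can be derived.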
The provided text does not contain specific acceptance criteria or details of a study proving the device meets acceptance criteria.
The document is a 510(k) Premarket Notification for the QLAB Quantification Software. It outlines the administrative information, device description, performance standards (or lack thereof), safety and effectiveness concerns, substantially equivalent devices, and software development processes. It concludes that the software is designed to meet United States and international standards for display and quantification of images acquired on Philips Ultrasound devices and incorporates features of legally marketed devices without raising new safety or effectiveness issues.
Missing information includes:
- Acceptance Criteria and Reported Device Performance: No specific numerical or qualitative acceptance criteria are provided for the QLAB Quantification Software, nor are there any reported performance metrics against such criteria.
- Sample Size and Data Provenance for Test Set: The document does not mention a test set, its sample size, or the provenance of any data used for testing.
- Number and Qualifications of Experts for Ground Truth: There is no information about experts used to establish ground truth.
- Adjudication Method: No adjudication method is described.
- MRMC Comparative Effectiveness Study: The document does not mention an MRMC study or any effect size for human readers with or without AI assistance.
- Standalone Performance Study: No details of a standalone (algorithm-only) performance study are provided.
- Type of Ground Truth Used: The document does not specify the type of ground truth used for any evaluations.
- Sample Size for Training Set: There is no mention of a training set or its sample size.
- How Ground Truth for Training Set was Established: This information is not present.
The document primarily focuses on regulatory compliance, device description, and claims of substantial equivalence to predicate devices, rather than detailed performance study results against specific acceptance criteria.
(11 days)
QLAB QUANTIFICATION SOFTWARE
QLAB Quantification software is a software application package. It is designed to view and quantify image data acquired on Philips Medical Systems ultrasound products.
The Parametric Imaging feature is an addition to the existing Cardiac 3DQ Advanced Plug-in already described on the iE33 510(k) submission K042540. The QLAB 5.0 3DQA plug-in Parametric Imaging provides the user with easy-to-use color-coded representations of regional left-ventricular (LV) segmental Timing and Excursion parameters displayed on the standard AHA/ASE 17-segment Bull's-eye display. The Parametric display may be used in assisting the clinician to visualize directly LV regional function in a user-friendly format.
This document does not contain information about acceptance criteria or a study proving device performance against such criteria. It is a 510(k) premarket notification for a software device, and primarily focuses on administrative details, device description, and a claim of substantial equivalence to a predicate device.
Therefore, I cannot populate the table or answer the specific questions regarding acceptance criteria and performance study details. The provided text explicitly states there are "No performance standards for PACS systems or components have been issued under the authority of Section 514" for this device category.
Here's how I would respond if such information were present:
Table of Acceptance Criteria and Reported Device Performance:
Not available in the provided text.
Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
Not available in the provided text.
Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
Not available in the provided text.
Adjudication method (e.g. 2+1, 3+1, none) for the test set:
Not available in the provided text.
Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
Not available in the provided text.
Whether a standalone study (i.e., algorithm-only, without human-in-the-loop performance) was done:
Not available in the provided text.
The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
Not available in the provided text.
The sample size for the training set:
Not available in the provided text.
How the ground truth for the training set was established:
Not available in the provided text.
(15 days)
QLAB QUANTIFICATION SOFTWARE
QLAB Quantification software is a software application package. It is designed to view and quantify image data acquired on Philips Medical Systems ultrasound products.
QLAB version 3.0 adds a Cardiac 3DQ Plug-in (3D viewer with 3D measurements), a GI 3D Viewer Plug-in, and an MVI Plug-in to the cleared QLAB version 2.0.
The Cardiac 3DQ plug-in provides a means of opening, displaying, manipulating, and measuring 3D image files from currently cleared Philips Ultrasound systems. The 3DQ plug-in also allows distance, area, volume, and mass measurements from MultiPlanar Reconstruction (MPR) images derived from the 3D data sets. The software also provides a means of exporting the data generated by the plug-in module in a form accessible to the end user.
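As a concrete illustration of the kind of volume measurement a 3D quantification plug-in performs, the sketch below counts the voxels inside a binary segmentation mask and scales by the voxel size. This is a generic technique under stated assumptions, not the product's actual method; all names are invented.

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume in mL of a binary segmentation mask, given per-axis voxel
    spacing in millimetres (1 mL = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return float(mask.sum()) * voxel_mm3 / 1000.0

# A 20x20x20-voxel cube at 0.5 mm isotropic spacing: 8000 voxels
# x 0.125 mm^3 each = 1000 mm^3 = 1.0 mL.
mask = np.zeros((50, 50, 50), dtype=bool)
mask[10:30, 10:30, 10:30] = True
vol = mask_volume_ml(mask, (0.5, 0.5, 0.5))
```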
The GI (General Imaging) 3D Viewer Plug-in reads DICOM-compliant files generated by currently cleared Philips Ultrasound systems. It contains tools for changing 3D volume rendering parameters. The volume rendering is done using the rendering engine shipping with the Boris Platform and Philips HDI 5000. Therefore the main volume rendering controls are the same as on the imaging system.
The MVI (Microvascular Imaging) Plug-in reads DICOM-compliant files generated by the Philips Boris Platform and the Philips HDI 5000 Platforms. It performs a Maximum Intensity Projection convolution of the cine information and allows viewing of the processed information. It provides tools for export of the resulting information in a standard AVI file format for use in presentations. The processing is accomplished exactly as in the predicate device (HDI 5000), with the exception that no user-selectable processing changes are possible.
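The Maximum Intensity Projection step described above can be sketched in a few lines: for each pixel position, keep the brightest value that position reaches across the cine frames. This is a minimal generic sketch, not Philips' implementation; the array shapes and values are illustrative assumptions.

```python
import numpy as np

def mip_over_time(cine):
    """Maximum Intensity Projection over the temporal axis of a cine
    loop shaped (frames, rows, cols): each output pixel is the
    brightest value that pixel reached across all frames."""
    return np.max(np.asarray(cine), axis=0)

# Tiny synthetic cine loop: three 4x4 frames with bright spots.
frames = np.zeros((3, 4, 4), dtype=np.uint8)
frames[0, 1, 1] = 50
frames[1, 1, 1] = 200   # brightest value ever seen at (1, 1)
frames[2, 2, 2] = 90
mip = mip_over_time(frames)
```

Accumulating the maximum across frames makes transient bright structures (such as contrast-enhanced microvasculature) persist in a single composite image.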
The provided text is a 510(k) Summary for the QLAB Quantification software, which focuses on device description, predicate devices, and general safety and effectiveness concerns. It explicitly states that "No performance standards for PACS systems or components have been issued under the authority of Section 514" and that the software "has been designed to comply with the following voluntary standards: MSDN Microsoft Developer's Network October 2001 and ISO Joint Photographic Experts Group (JPEG) Image Compression Standard."
Crucially, the document does not contain information about specific performance acceptance criteria or a study proving that the device meets such criteria. It mentions "software design, verification and validation testing" and a "risk assessment" but provides no details on the methodologies, results, or ground truth used for these internal processes.
Therefore, I cannot fully answer your request based on the provided text.
Here's a breakdown of what can and cannot be answered:
1. A table of acceptance criteria and the reported device performance
- Cannot be provided. The document does not specify any quantitative acceptance criteria (e.g., accuracy, sensitivity, specificity, measurement tolerances) or reported device performance metrics against such criteria. It only references compliance with general software development and image compression standards.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Cannot be provided. The document does not mention the sample size of any test set used, nor does it specify the provenance, type (retrospective/prospective), or origin of any data used for testing.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Cannot be provided. The document does not refer to the use of experts for establishing ground truth, nor does it describe their number or qualifications.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Cannot be provided. There is no mention of an adjudication method as no test set details are provided.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance
- Cannot be provided. The document does not mention any MRMC comparative effectiveness study or any assessment of human reader improvement with or without AI assistance. This device is described as a quantification and viewing software, not specifically an AI-assisted diagnostic tool in the sense of predictive or interpretive algorithms.
6. Whether a standalone study (i.e., algorithm-only, without human-in-the-loop performance) was done
- Cannot be determined from the text. While the device performs "quantification" and "volume rendering," which are algorithmic tasks, the document does not distinguish between testing done in a standalone manner versus with a human in the loop. The device is fundamentally a workstation for human interaction.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Cannot be provided. The document does not specify the type of ground truth used for any testing.
8. The sample size for the training set
- Cannot be provided. The document does not mention a training set or its sample size.
9. How the ground truth for the training set was established
- Cannot be provided. As no training set is mentioned, information on how its ground truth was established is absent.