510(k) Data Aggregation

Search Results (3 found)

    K Number
    K250120
    Device Name
    GECHO
    Manufacturer
    Date Cleared
    2025-07-14

    (179 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Reference Devices: K181264
    Intended Use

    GECHO is a software package intended to visually assess contrast-enhanced echocardiography for left ventricular function and myocardial blood flow by displaying enhanced images of the heart and Time-To-Replenish images. GECHO is intended for use by a cardiologist.

    GECHO is for use on images of adult patients who underwent contrast-enhanced echocardiography.

    Device Description

    GECHO is an image review platform and analysis software that assists cardiologists in the interpretation of left ventricular function and myocardial blood replenishment from two-dimensional, contrast-enhanced echocardiograms.

    AI/ML Overview

    The FDA 510(k) Clearance Letter for GECHO provides information about the device's intended use and design, but it does not detail specific acceptance criteria or the study results proving the device meets those criteria. The letter primarily focuses on establishing substantial equivalence to a predicate device (QLAB Advanced Quantification Software, K181264) and outlines general regulatory compliance.

    However, based on the information provided in the "510(k) Summary," particularly the "Performance Data" section, we can infer some aspects of the study and the implicit acceptance criteria. The summary explicitly states:

    • "The TTR (Time-to-Replenish) algorithm demonstrated strong performance with these key metrics:
      • RMSE of 0.98 seconds, showing high accuracy
      • Minimal bias of 0.0025 seconds
      • Can detect TTR values down to 0.5 seconds
      • Performs well even with noise (NMSE up to 0.05)"
    • "Additionally, an expert survey was conducted to ensure the TTR image correctly represents the information present in raw images and is intuitive and useful, in combination with raw images, for the interpretation of myocardial blood flow."

    Given these statements, the acceptance criteria would likely be defined by target values for RMSE, bias, minimum detectable TTR, and performance under noise, along with positive feedback from an expert survey regarding the clinical utility and accurate representation of information.
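    To make these metrics concrete, here is a minimal sketch, in Python, of how RMSE, bias, and NMSE could be computed when scoring TTR estimates against the known values built into synthetic data. Nothing here is from the submission: the function name `ttr_metrics`, the NMSE normalization (by signal power), and the example data are all assumptions.

```python
import numpy as np

def ttr_metrics(ttr_true, ttr_est):
    """Score estimated Time-to-Replenish values against known
    ground-truth values (e.g., built into synthetic data).

    Hypothetical helper: the 510(k) summary reports RMSE, bias, and
    NMSE but does not define them; these are common conventions.
    """
    ttr_true = np.asarray(ttr_true, dtype=float)
    ttr_est = np.asarray(ttr_est, dtype=float)
    err = ttr_est - ttr_true
    rmse = np.sqrt(np.mean(err ** 2))        # summary reports 0.98 s
    bias = np.mean(err)                      # summary reports 0.0025 s
    # One common NMSE convention: MSE normalized by signal power.
    nmse = np.mean(err ** 2) / np.mean(ttr_true ** 2)
    return {"rmse_s": rmse, "bias_s": bias, "nmse": nmse}

# Example: ten synthetic cases with known TTR values.
rng = np.random.default_rng(0)
truth = rng.uniform(0.5, 10.0, size=10)      # 0.5 s detection floor
estimates = truth + rng.normal(0.0, 0.5, size=10)
print(ttr_metrics(truth, estimates))
```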

    Let's organize the available and inferred information:


    Acceptance Criteria and Device Performance

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria (Inferred from Reported Performance) | Reported Device Performance
    ---------------------------------------------------------|----------------------------
    Root Mean Square Error (RMSE) for TTR algorithm          | 0.98 seconds
    Bias for TTR algorithm                                   | 0.0025 seconds
    Minimum detectable TTR value                             | Can detect TTR values down to 0.5 seconds
    Performance with noise (Normalized Mean Square Error)    | Performs well even with noise (NMSE up to 0.05)
    Expert consensus on TTR image representation and utility | "Expert survey... ensured the TTR image correctly represents the information present in raw images and is intuitive and useful"

    Study Details

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: The document mentions "synthetic data across a wide range of representative clinical data parameters" was used for technical performance assessment of the TTR algorithm. However, the specific sample size (number of synthetic cases or data points) for this test set is not explicitly stated.
    • Data Provenance: The primary data for the technical performance assessment was synthetic. The document does not specify the country of origin of the parameters used to generate the synthetic data, or whether any real patient data (even de-identified) informed that generation. Because this was a technical performance assessment on synthetic data rather than a clinical study on real patients, the labels "retrospective" and "prospective" do not directly apply. (A sketch of how ground truth can exist by construction in such data follows below.)
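    Although the submission does not describe its synthetic data model, a common way to obtain ground truth "by construction" is to generate replenishment curves from a parametric model with a known TTR. The sketch below assumes a mono-exponential destruction-replenishment model, y(t) = A(1 - exp(-βt)), and arbitrarily defines TTR as the time to reach 90% of the plateau; every name and parameter is illustrative, not GECHO's method.

```python
import numpy as np

def synthetic_replenishment_curve(ttr_s, plateau=1.0, fs_hz=20.0,
                                  duration_s=15.0, noise_sd=0.02, rng=None):
    """Generate one synthetic contrast-replenishment intensity curve
    with a known TTR, so ground truth exists by construction.

    Illustrative model only: a mono-exponential destruction-
    replenishment curve y(t) = A * (1 - exp(-beta * t)), with TTR
    defined (arbitrarily here) as time to reach 90% of plateau.
    """
    rng = rng or np.random.default_rng()
    beta = -np.log(0.1) / ttr_s      # solves 1 - exp(-beta*TTR) = 0.9
    t = np.arange(0.0, duration_s, 1.0 / fs_hz)
    y = plateau * (1.0 - np.exp(-beta * t))
    y += rng.normal(0.0, noise_sd, size=t.shape)   # additive noise
    return t, y

# The known TTR is recorded alongside each curve, so estimators can
# be scored against it without any human adjudication.
t, y = synthetic_replenishment_curve(ttr_s=2.5)
```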

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: The document states "an expert survey was conducted." The exact number of experts involved is not specified.
    • Qualifications of Experts: The Indications for Use names the intended user as a "cardiologist," and the survey participants are described only as "experts." While it is implied that they are cardiologists, their specific qualifications (e.g., years of experience, sub-specialty) are not detailed.

    4. Adjudication method for the test set

    • For the quantitative metrics (RMSE, bias, etc.), the "ground truth" was inherently defined by the generation of the synthetic data, which presumably had known TTR values as part of its construction. Therefore, no human adjudication method was needed for these quantitative comparisons.
    • For the "expert survey" regarding image representation and utility, the adjudication method (e.g., consensus, majority vote) is not specified. It simply states the survey "ensured" these aspects.

    5. If a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • No. An MRMC comparative effectiveness study was explicitly not done. The document states: "No clinical performance data was necessary to claim substantial equivalence." The "expert survey" was a qualitative assessment of image utility, not a quantitative MRMC study measuring reader performance improvement with AI assistance.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Yes, a standalone algorithm performance assessment was conducted. The "Technical Performance Assessment" section details the algorithm's performance on synthetic data (RMSE, bias, minimum detectable TTR, performance with noise), which is the algorithm's standalone performance.

    7. The type of ground truth used

    • For the quantitative assessment of the TTR algorithm (RMSE, bias, etc.), the ground truth was based on known values embedded within the synthetic data.
    • For the qualitative assessment of image representation and utility, the ground truth was established by expert opinion/consensus via the expert survey.

    8. The sample size for the training set

    • The document does not provide any information regarding the sample size of the training set for the GECHO algorithm.

    9. How the ground truth for the training set was established

    • The document does not provide any information regarding how the ground truth for the training set was established, as details about the training set itself are absent.

    In summary: The FDA 510(k) summary focuses heavily on establishing substantial equivalence based on technical characteristics and functionality with a predicate device, along with verification and validation of software and a technical performance assessment of a key algorithm (TTR) using synthetic data. It explicitly states that no clinical performance data was deemed necessary for this clearance, which means there was no demonstration of human-in-the-loop performance improvement or comparative effectiveness against human readers.


    K Number
    K200974
    Manufacturer
    Date Cleared
    2020-06-03

    (51 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Reference Devices: K181264, K150122
    Intended Use

    QLAB Advanced Quantification Software is a software application package. It is designed to view and quantify image data acquired on Philips ultrasound systems.

    Device Description

    The Philips QLAB Advanced Quantification Software System (QLAB) is designed to view and quantify image data acquired on Philips ultrasound systems. QLAB is available as a stand-alone product that can run on a standard PC or a dedicated workstation, or on-board Philips ultrasound systems.

    The purpose of this Traditional 510(k) Pre-market Notification is to introduce the new 3D Auto MV cardiac quantification application to the Philips QLAB Advanced Quantification Software, which was most recently cleared under K191647. The latest QLAB software version (launching at version 15.0) will include the new Q-App 3D Auto MV, which integrates the segmentation engine of the cleared QLAB HeartModel Q-App (K181264) and the TomTec-Arena 4D MV Assessment application (K150122), thereby providing a dynamic Mitral Valve clinical quantification tool.

    AI/ML Overview

    The document describes the QLAB Advanced Quantification Software System and its new 3D Auto MV cardiac quantification application.

    Here's an analysis of the acceptance criteria and study information:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document does not explicitly state acceptance criteria in a quantitative table format (e.g., "accuracy must be > 90%"). Instead, it states that the device was tested to "meet the defined requirements and performance claims." The performance is demonstrated by the non-clinical verification and validation testing, and the 3D Auto MV Algorithm Training and Validation Study.

    The document provides a comparison table (Table 1 on page 6-7) that highlights the features and a technical comparison to predicate devices, but this table does not present quantitative performance against specific acceptance criteria for the new 3D Auto MV feature. It lists parameters that the new application will measure, such as:

    • Saddle Shaped Annulus Area (cm²)
    • Saddle Shaped Annulus Perimeter (cm)
    • Total Open Coaptation Area (cm²)
    • Anterior Closure Line Length (cm)
    • Posterior Closure Line Length (cm)

    However, it does not provide reported performance values for these parameters from the validation study against any predefined acceptance criteria. The statement is that "All other measurements are identical to the predicate 4D MV-Assessment application," implying a level of equivalence, but without specific data.
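    As an illustration of what such geometric measurements involve (the submission lists the parameters but not their computation), here is a hedged sketch of perimeter and area estimates from an ordered set of 3D annulus landmarks. The function names and the triangle-fan area approximation are assumptions for illustration, not Philips' method:

```python
import numpy as np

def annulus_perimeter_cm(points_cm):
    """Perimeter of a closed 3D annulus contour, as the summed length
    of consecutive landmark-to-landmark segments.

    Illustrative only; points_cm is an (N, 3) array of ordered 3D
    landmarks in centimeters.
    """
    p = np.asarray(points_cm, dtype=float)
    seg = np.diff(np.vstack([p, p[:1]]), axis=0)   # close the loop
    return float(np.linalg.norm(seg, axis=1).sum())

def annulus_area_cm2(points_cm):
    """Area of the (generally non-planar, 'saddle-shaped') annulus,
    approximated as a triangle fan about the contour centroid."""
    p = np.asarray(points_cm, dtype=float)
    c = p.mean(axis=0)
    q = np.vstack([p, p[:1]]) - c
    cross = np.cross(q[:-1], q[1:])                # per-triangle normals
    return float(0.5 * np.linalg.norm(cross, axis=1).sum())

# Example: a circular annulus of radius 1.5 cm with a slight saddle
# shape (expected area close to pi * 1.5^2, about 7.07 cm^2).
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
ring = np.column_stack([1.5 * np.cos(theta), 1.5 * np.sin(theta),
                        0.2 * np.sin(2 * theta)])
print(annulus_perimeter_cm(ring), annulus_area_cm2(ring))
```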

    2. Sample Size Used for the Test Set and Data Provenance:

    The document mentions that non-clinical V&V testing also included the 3D Auto MV Algorithm Training and the subsequent Validation Study performed for the proposed 3D Auto MV clinical application. However, it does not specify the sample size used for this validation study (i.e., the test set). The data provenance (e.g., country of origin, retrospective or prospective collection) is also not specified.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications:

    This information is not provided in the document.

    4. Adjudication Method for the Test Set:

    This information is not provided in the document.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    The document does not indicate that an MRMC comparative effectiveness study was done. It focuses on the software's performance and substantial equivalence to predicate devices, not on how human readers' performance might improve with its assistance.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study:

    The document describes the 3D Auto MV Q-App as a "semi-automatic tool" and states that the "User is able to edit, accept, or reject the initial landmark proposals of the mitral valve anatomical locations." This suggests that a purely standalone (algorithm-only) performance study, without any human-in-the-loop interaction, would not be fully representative of its intended use. The validation study presumably evaluates its performance within this semi-automatic workflow, but specific details are lacking.
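    To illustrate what a semi-automatic, human-in-the-loop workflow like this can look like in data terms, here is a hypothetical sketch of a landmark-review structure. The submission only states that users can edit, accept, or reject proposals, so all names and fields below are assumptions:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Tuple

class ReviewAction(Enum):
    ACCEPT = "accept"
    REJECT = "reject"
    EDIT = "edit"

@dataclass
class LandmarkProposal:
    """One algorithm-proposed mitral valve landmark awaiting review.

    Hypothetical structure: the submission says only that the user can
    'edit, accept, or reject the initial landmark proposals'.
    """
    name: str                                   # anatomical location label
    proposed_mm: Tuple[float, float, float]     # algorithm-proposed (x, y, z)
    action: ReviewAction = ReviewAction.ACCEPT
    edited_mm: Optional[Tuple[float, float, float]] = None

    def final_position(self) -> Optional[Tuple[float, float, float]]:
        """Position that feeds quantification after human review."""
        if self.action is ReviewAction.REJECT:
            return None                         # excluded from measurements
        if self.action is ReviewAction.EDIT and self.edited_mm is not None:
            return self.edited_mm
        return self.proposed_mm

# Example: reviewer accepts one proposal and corrects another.
a = LandmarkProposal("anterior annulus", (12.0, 8.5, 40.2))
b = LandmarkProposal("posterior annulus", (14.1, 9.0, 41.0),
                     action=ReviewAction.EDIT, edited_mm=(14.4, 9.2, 41.0))
print(a.final_position(), b.final_position())
```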

    7. Type of Ground Truth Used:

    The document describes the 3D Auto MV application integrating the machine-learning derived segmentation engine of the QLAB HeartModel and the TOMTEC-Arena TTA2 4D MV-Assessment application. The ground truth for the training of the HeartModel (and subsequently the 3D Auto MV) would typically involve expert annotations of anatomical structures. However, the specific type of ground truth used for the validation study mentioned ("3D Auto MV Algorithm Training and the subsequent Validation Study") is not explicitly stated. Given the context of cardiac quantification, it would most likely be based on expert consensus or expert-derived measurements from the imaging data itself.

    8. Sample Size for the Training Set:

    The document mentions "3D Auto MV Algorithm Training" but does not specify the sample size used for the training set.

    9. How the Ground Truth for the Training Set Was Established:

    The document states that the 3D Auto MV Q-App "integrates the segmentation engine of the cleared QLAB HeartModel Q-App (K181264)". For HeartModel, it says: "The HeartModel Q-App provides a semi-automatic 3D anatomical border detection and identification of the heart chambers for the end-diastole (ED) and end-systole (ES) cardiac phases." And for its contour generation: "3D surface model is created semi-automatically without user interaction. User is required to edit, accept, or reject the contours before proceeding with the workflow."

    This implies that the training of the HeartModel's segmentation engine (and inherited by 3D Auto MV) was likely based on expert-derived or expert-validated anatomical annotations/contours, which would have been used to establish the "ground truth" for the machine learning algorithm. However, explicit details on how this ground truth was established for the training data (e.g., number of annotators, their qualifications, adjudication methods) are not provided for this specific submission (K200974). It simply references the cleared HeartModel Q-App (K181264).
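    For context on how expert-annotated ground truth is typically scored against an algorithm's output in segmentation work (the submission gives no such details), here is a minimal sketch of the Dice overlap coefficient, a common agreement metric. Its use here is an assumption, not something stated for HeartModel or 3D Auto MV:

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice overlap between a binary algorithm segmentation and a
    binary expert annotation (1.0 = perfect agreement).

    Illustrative metric only; Dice is a common choice for comparing
    contours or masks against expert-derived ground truth.
    """
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0                  # both empty: treat as agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Example on toy 2D masks standing in for expert vs. algorithm contours.
expert = np.zeros((8, 8), dtype=bool); expert[2:6, 2:6] = True
algo = np.zeros((8, 8), dtype=bool); algo[3:7, 2:6] = True
print(dice_coefficient(expert, algo))   # 0.75
```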


    K Number
    K200713
    Device Name
    EchoNavigator
    Date Cleared
    2020-04-09

    (22 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Reference Devices: K181264
    Intended Use

    EchoNavigator supports the interventionalist and surgeon in treatments where both live X-ray and live Echo guidance are used. The targeted patient population consists of patients with cardiovascular diseases requiring such a treatment.

    Device Description

    EchoNavigator is a tool that assists the interventionalist and surgeon with image guidance during treatment of cardiovascular disease for which the procedure uses both live X-ray and live Echo guidance. EchoNavigator can be used with compatible Echo-probes and Echo units in combination with compatible Philips interventional X-ray systems.

    AI/ML Overview

    The provided text does NOT contain information about acceptance criteria or specific study results for the EchoNavigator R3.0.3 device.

    The document is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device rather than presenting detailed performance studies with acceptance criteria.

    Here's a breakdown of what is and is not in the document regarding your request:

    What is present in the document:

    • Device Name: EchoNavigator R3.0.3
    • Predicate Device: EchoNavigator R1 (K121781)
    • Indications for Use: "EchoNavigator supports the interventionalist and surgeon in treatments where both live X-ray and live Echo guidance are used. The targeted patient population consists of patients with cardiovascular diseases requiring such a treatment."
    • Technological Characteristics Comparison: The document states that the proposed device has similar technological characteristics to the predicate. It notes the ability to display images from live X-ray and Ultrasound, and an unchanged core algorithm for probe detection. It also lists functionalities available in both devices (Synchronize Image Orientation, Multiple Views, Follow C-arm, Table Side Control, Manual Annotations, Image Capture Export). It mentions modifications like an extended "Multiple Views" functionality with a "Model View" and basic Touch Screen Module control for "Table Side Control." (A sketch of the probe-detection coordinate-mapping idea follows after this list.)
    • Non-Clinical Performance Data: Mentions "Software verification testing" and "Non-clinical in-house simulated use design validation testing" were performed, and all tests passed. It lists several recognized standards that the device complies with (IEC 62304, IEC 62366-1, IEC 82304-1, ISO 14971, ISO 15223-1, UL 2900-1, IEC 80001-1).
    • Conclusion: States that EchoNavigator R3.0.3 is substantially equivalent to the predicate device in terms of indications for use, technological characteristics, safety, and effectiveness, without raising new safety or effectiveness concerns.
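    As noted above, here is a hedged sketch of the coordinate-mapping idea behind functionality like "Synchronize Image Orientation": once a probe-detection algorithm estimates the TEE probe's pose from the X-ray frames, a rigid transform can map echo-space annotations into the X-ray image. The summary confirms none of this math; the rigid-transform-plus-pinhole model and all parameter values are assumptions for illustration only:

```python
import numpy as np

def echo_point_to_xray_pixel(p_echo_mm, R_probe, t_probe_mm,
                             focal_px=2000.0, center_px=(512.0, 512.0)):
    """Map a point annotated in echo coordinates into the live X-ray
    image, given the probe pose estimated from the X-ray frames.

    Illustrative sketch only: a rigid transform (R_probe, t_probe_mm)
    from echo to X-ray/world coordinates and an ideal pinhole
    projection are assumed, not EchoNavigator's actual method.
    """
    p = np.asarray(p_echo_mm, dtype=float)
    p_world = R_probe @ p + np.asarray(t_probe_mm, dtype=float)
    u = focal_px * p_world[0] / p_world[2] + center_px[0]
    v = focal_px * p_world[1] / p_world[2] + center_px[1]
    return u, v

# Example: identity orientation, hypothetical depth along the beam.
R = np.eye(3)
t = np.array([0.0, 0.0, 800.0])
print(echo_point_to_xray_pixel([5.0, -3.0, 0.0], R, t))
```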

    What is NOT present in the document (and therefore cannot be provided in the requested table/answers):

    • Specific Acceptance Criteria: The document does not define quantitative or qualitative acceptance criteria for device performance (e.g., accuracy, precision, sensitivity, specificity, time measurements, etc.). It only states that tests were "passed."
    • Reported Device Performance: No specific performance metrics or values are reported.
    • Sample Size for Test Set: Not mentioned.
    • Data Provenance (country, retrospective/prospective): Not mentioned.
    • Number of Experts and Qualifications for Ground Truth: Not mentioned.
    • Adjudication Method: Not mentioned.
    • MRMC Comparative Effectiveness Study: Not mentioned as being performed.
    • Standalone Performance Study: The document refers to "Software verification testing" and "Non-clinical in-house simulated use design validation testing," which might be considered standalone, but no performance metrics are provided.
    • Type of Ground Truth Used: Not mentioned for any studies.
    • Sample Size for Training Set: Not mentioned (and likely not applicable since it's not described as an AI/ML device that requires a training set in the typical sense; the focus is on software verification and simulated use validation).
    • Ground Truth Establishment for Training Set: Not mentioned.

    In summary, the provided text does not contain the detailed performance data, acceptance criteria, study methodologies (like sample sizes, expert involvement, or adjudication), or ground truth specifics that your request asks for. The document is a regulatory submission focused on substantial equivalence based on non-clinical testing and comparison to a predicate, not a detailed performance study report.

