
510(k) Data Aggregation

    K Number
    K243065


    Device Name
    Cardiac Guidance
    Date Cleared
    2025-01-15

    (110 days)

    Product Code
    Regulation Number
    892.2100
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    Reference Devices:

    DEN190040, K201992

    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, is PCCP Authorized, Third-party, Expedited review
    Intended Use

    The Cardiac Guidance software is intended to assist medical professionals in the acquisition of cardiac ultrasound images. Cardiac Guidance software is an accessory to compatible general purpose diagnostic ultrasound systems.

    The Cardiac Guidance software is indicated for use in two-dimensional transthoracic echocardiography (2D-TTE) for adult patients, specifically in the acquisition of the following standard views: Parasternal Long-Axis (PLAX), Parasternal Short-Axis at the Aortic Valve (PSAX-AV), Parasternal Short-Axis at the Mitral Valve (PSAX-MV), Parasternal Short-Axis at the Papillary Muscle (PSAX-PM), Apical 4-Chamber (AP4), Apical 5-Chamber (AP5), Apical 2-Chamber (AP2), Apical 3-Chamber (AP3), Subcostal 4-Chamber (SubC4), and Subcostal Inferior Vena Cava (SC-IVC).

    Device Description

    The Cardiac Guidance software is a radiological computer-assisted acquisition guidance system that provides real-time guidance during echocardiography to assist the user in capturing anatomically correct images representing standard 2D echocardiographic diagnostic views and orientations. This AI-powered, software-only device emulates the expertise of skilled sonographers.

    Cardiac Guidance comprises several different features that, combined, provide expert guidance to the user. These include:

    • Quality Meter: The real-time feedback from the Quality Meter advises the user on the expected diagnostic quality of the resulting clip, such that the user can make decisions to further optimize the quality, for example by following the prescriptive guidance feature below.
    • Prescriptive Guidance: The prescriptive guidance feature in Cardiac Guidance provides direction to the user to emulate how a sonographer would manipulate the transducer to acquire the optimal view.
    • Auto-Capture: The Cardiac Guidance Auto-Capture feature triggers an automatic capture of a clip when the quality is predicted to be diagnostic, emulating the way in which a sonographer knows when an image is of sufficient quality to be diagnostic and records it.
    • Save Best Clip: This feature continually assesses clip quality while the user is scanning and, in the event that the user is not able to obtain a clip sufficient for Auto-Capture, the software allows the user to retrospectively record the highest quality clip obtained so far, mimicking the choice a sonographer might make when recording an exam.
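The Auto-Capture and Save Best Clip behaviors described above amount to a thresholded capture plus a running best-so-far selection. A minimal sketch of that logic, with invented names, quality scores, and threshold (this is an illustration, not the vendor's implementation):

```python
# Illustrative sketch only; names, scores, and threshold are assumptions.
AUTO_CAPTURE_THRESHOLD = 0.8  # assumed cutoff for "predicted diagnostic"

def scan_session(clips):
    """clips: iterable of (clip_id, quality) pairs in acquisition order.

    Returns (auto_captured_ids, best_clip_so_far) where the second value
    backs a retrospective "Save Best Clip" action if nothing auto-captured.
    """
    captured, best = [], None
    for clip_id, quality in clips:
        if quality >= AUTO_CAPTURE_THRESHOLD:
            captured.append(clip_id)          # Auto-Capture fires
        if best is None or quality > best[1]:
            best = (clip_id, quality)         # Save Best Clip candidate
    return captured, best

captured, best = scan_session([("c1", 0.4), ("c2", 0.7), ("c3", 0.6)])
print(captured, best)  # [] ('c2', 0.7): nothing auto-captured, c2 is best
```

Here no clip crosses the assumed threshold, so Auto-Capture never fires, but the user can still retrospectively save the highest-quality clip observed.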
    AI/ML Overview

    The provided document is a 510(k) summary for Cardiac Guidance software, which is a radiological computer-assisted acquisition guidance system. It discusses an updated Predetermined Change Control Plan (PCCP) and addresses how future modifications will be validated. However, it does not contain a detailed performance study with specific acceptance criteria and results from such a study for the current submission.

    The document focuses on the plan for future modifications and ensuring substantial equivalence through predefined testing. While it mentions that "Safety and performance of the Cardiac Guidance software will be evaluated and verified in accordance with software specifications and applicable performance standards through software verification and validation testing outlined in the submission," and "The test methods specified in the PCCP establish substantial equivalence to the predicate device, and include sample size determination, analysis methods, and acceptance criteria," the specific details of a study proving the device meets acceptance criteria are not included in this document.

    Therefore, the following information cannot be fully extracted based solely on the provided text:

    • A table of acceptance criteria and reported device performance (for the current submission/PCCP update).
    • Sample size used for the test set and data provenance.
    • Number of experts and their qualifications for establishing ground truth for the test set.
    • Adjudication method for the test set.
    • Results of a multi-reader multi-case (MRMC) comparative effectiveness study, including effect size.
    • Details of a standalone (algorithm only) performance study.
    • The type of ground truth used.
    • Sample size for the training set.
    • How the ground truth for the training set was established.

    However, the document does contain information about performance testing and acceptance criteria for future modifications under the PCCP.

    Here's a summary of what can be extracted or inferred regarding performance and validation, specifically related to the plan for demonstrating that future modifications will meet acceptance criteria:


    1. A table of Acceptance Criteria and the Reported Device Performance:

    The document describes the types of testing and the intent to use acceptance criteria for future modifications. It does not provide a table of acceptance criteria and reported device performance for the current submission or previous clearances. It states:

    "The test methods specified in the PCCP establish substantial equivalence to the predicate device, and include sample size determination, analysis methods, and acceptance criteria."

    This indicates that acceptance criteria will be defined for future validation tests, but they are not listed here. The document focuses on the types of modifications and the high-level testing methods:

    • Retraining/optimization/modification of core algorithm(s): Repeating verification tests and the system level validation test to ensure the pre-defined acceptance criteria are met.
    • Real-time guidance for additional 2D TTE views: Repeating verification tests and two system level validation tests, including usability testing, to ensure the pre-defined acceptance criteria are met for the additional views.
    • Optimization of the core algorithm(s) implementation (thresholds, averaging logic, transfer functions, frequency, refresh rate): Repeating relevant verification test(s) and the system level validation test to ensure the pre-defined acceptance criteria are met.
    • Addition of new types of prescriptive guidance (patient positioning, breathing guidance, combined probe movements, pressure, sliding/angling) and addition of existing guidance types to all views: Repeating relevant verification tests and two system level validation tests, including usability testing, to ensure the pre-defined acceptance criteria are met.
    • Labeling compatibility with various screen sizes (including mobile) and UI/UX changes (e.g., audio, configurability of guidance): Repeating relevant verification tests and the system level validation test, including usability testing, to ensure the pre-defined acceptance criteria are met.

    2. Sample size used for the test set and the data provenance:

    The document states:

    "To ensure validation test datasets are representative of the intended use population, each will meet minimum demographic requirements."

    However, specific sample sizes and data provenance (e.g., country of origin, retrospective/prospective) for any performance study are not provided in this document. It only refers to "sample size determination" as being included in the test methods for the PCCP.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    This information is not provided in the document.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    This information is not provided in the document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance:

    The document refers to a "Non-expert Validation" being added to the subject PCCP, which was "Not included" in the K201992 PCCP. It describes this as:

    "Adds standalone test protocol to enable validation of modified device performance by the intended user groups, ensuring equivalency to the original device based on predefined clinical endpoints."

    While this suggests a study involving users, it does not explicitly state it's an MRMC comparative effectiveness study comparing human readers with and without AI assistance, nor does it provide any effect size.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    The document's "Testing Methods" column frequently mentions "Repeating verification tests and the system level validation test to ensure the pre-defined acceptance criteria are met." This suggests that standalone algorithm performance testing (verification and system-level validation) is part of the plan for future modifications. However, specific details of such a study are not provided in this document.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    This information is not explicitly stated in the document. The "Non-expert Validation" mentions "predefined clinical endpoints," but the source of the ground truth for those endpoints is not detailed.

    8. The sample size for the training set:

    This information is not provided in the document.

    9. How the ground truth for the training set was established:

    This information is not provided in the document. The document mentions "Retraining/optimization/modification of core algorithm(s)" and that "The modification protocol incorporates impact assessment considerations and specifies requirements for data management, including data sources, collection, storage, and sequestration, as well as documentation and data segregation/re-use practices," implying a training set exists, but details on ground truth establishment are missing.


    K Number
    K240953


    Manufacturer
    Date Cleared
    2024-08-05

    (119 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    Reference Devices:

    DEN190040, K232501

    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, is PCCP Authorized, Third-party, Expedited review
    Intended Use

    AI Platform 2.0 is intended for noninvasive processing of ultrasound images to detect, measure, and calculate relevant medical parameters of structures and function of patients with suspected disease. In addition, it can provide Quality Score feedback to assist healthcare professionals, trained and qualified to conduct echocardiography and lung ultrasound scans in the current standard of care, while acquiring ultrasound images. The device is intended to be used on images of adult patients.

    Device Description

    Exo AI Platform 2.0 (AIP 2.0) is a software as a medical device (SaMD) that helps qualified users with image-based assessment of ultrasound examinations in adult patients. It is designed to simplify workflow by helping trained healthcare providers evaluate, quantify, and generate reports for ultrasound images. AIP 2.0 takes input in the Digital Imaging and Communications in Medicine (DICOM) format from ultrasound scanners of a specific range and allows users to detect, measure, and calculate relevant medical parameters of structures and function of patients with suspected disease. In addition, it provides frame and clip quality scores in real time for the Left Ventricle from the four-chamber apical and parasternal long axis views of the heart and lung scans. The AI modules are also provided as a software component to be integrated by another computer programmer into their legally marketed ultrasound imaging device. Essentially, the Algorithm and API modules are medical device accessories.

    Key features of the software are:

    • Lung AI: An AI-assisted tool for suggesting the presence of lung structures and artifacts on ultrasound images, namely A-lines. Additionally, a per-frame and per-clip quality score is generated for each lung scan.
    • Cardiac AI: An AI-assisted tool for the quantification of Left Ventricular Ejection Fraction (LVEF), Myocardium wall thickness (Interventricular Septum (IVSd), Posterior wall (PWd)), and IVC diameter on cardiac ultrasound images. Additionally, a per-frame and per-clip quality score is generated for each Apical and PLAX cardiac scan.
    AI/ML Overview

    The provided text describes the acceptance criteria and the study that proves the device, AI Platform 2.0 (AIP 2.0), meets these criteria for specific functionalities. This device is a software as a medical device (SaMD) intended for processing ultrasound images for adult patients, including detecting, measuring, and calculating medical parameters, and providing quality score feedback during image acquisition.

    Here's a breakdown of the requested information:

    1. A table of acceptance criteria and the reported device performance

    The document specifies performance metrics for two main functionalities tested: Left Ventricle Wall Thickness and Inferior Vena Cava (IVC) measurements, and Quality AI (for frames and clips). The acceptance criteria are implicitly high correlation with expert measurements, indicated by high Interclass Correlation (ICC) values.

    • LV Wall Thickness (acceptance: high correlation with experts)
        InterVentricular Septum (IVSd): ICC 0.93 (0.89 – 0.96)
        Posterior Wall (PWd): ICC 0.94 (0.89 – 0.97)
    • Inferior Vena Cava (IVC) (acceptance: high correlation with experts)
        IVC Dmin: ICC 0.93 (0.90 – 0.95)
        IVC Dmax: ICC 0.94 (0.90 – 0.96)
    • Quality AI (acceptance: high agreement with experts)
        Overall agreement (frames): ICC 0.94 (0.94 – 0.95)
        Overall agreement (clips): ICC 0.94 (0.92 – 0.95)
    • Diagnostic Classification (acceptance: >95% agreement with experts at ACEP score >= 3)
        98.3% of clips rated ACEP >= 3 by experts received at least "Minimum criteria met for diagnosis" from Clip Quality AI.
        98.0% of scans rated "Minimal criteria met for diagnosis" or "good" by Quality AI were deemed diagnostic by experts (ACEP score of 3 or higher).
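The interclass correlation figures quoted above can be illustrated with a plain-Python ICC(2,1) computation (two-way random effects, absolute agreement, single measurement), a common choice for comparing AI measurements against expert readers. The measurement values below are hypothetical, not from the submission:

```python
# Illustrative sketch only; the data are invented.
def icc2_1(data):
    """ICC(2,1) for data laid out as n subjects x k raters."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]

    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_rows - ss_cols                    # residual

    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical IVSd wall-thickness measurements in mm: [AI, expert average]
measurements = [
    [9.1, 9.0], [11.4, 11.2], [8.7, 8.9], [12.0, 12.3],
    [10.2, 10.0], [9.8, 9.9], [13.1, 12.8], [7.9, 8.1],
]
print(f"ICC(2,1) = {icc2_1(measurements):.3f}")  # close to 1: strong agreement
```

With small rater-to-rater differences relative to the spread across subjects, the ICC lands near 1, which is the pattern the reported 0.93-0.94 values reflect.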

    2. Sample size used for the test set and the data provenance

    • LV Wall Thickness and IVC measurements: 100 subjects.
    • Quality AI (Section a): 184 patients, resulting in 226 clips (29,732 frames).
    • Quality AI (Section b, real-time scanning): 396 lung and cardiac scans.
    • Data Provenance: The test data encompassed diverse demographic variables (gender, age, ethnicity) from multiple sites in metropolitan cities with diverse racial patient populations. The text states the data was entirely separated from the training/tuning datasets. The studies were retrospective for the initial quality evaluation (comparing to previously acquired data rated by sonographers) and prospective for the real-time quality AI evaluation (data acquired while using the AI in real-time by users with varying experience).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • LV Wall Thickness and IVC measurements: Ground truth was established as the average measurement of three experts. Their specific qualifications (e.g., years of experience, specialty) are not explicitly stated beyond "experts."
    • Quality AI (Section a): Ground truth was established by "experienced sonographers." Their number and specific qualifications are not detailed beyond "experienced."
    • Quality AI (Section b, real-time scanning): Ground truth for diagnostic classification was established by "expert readers" (ACEP score of 3 or above). Their number and specific qualifications are not detailed beyond "expert readers."

    4. Adjudication method for the test set

    • LV Wall Thickness and IVC measurements: Ground truth was taken as the average measurement of three experts, a central-tendency form of consensus rather than a formal adjudication scheme.
    • Quality AI (Section a): Ground truth was based on "quality rating by experienced sonographers on each frame and the entire clip." It doesn't explicitly state an adjudication method beyond this, implying individual expert ratings were used or a single consensus was reached, but not a specific multi-reader adjudication process like 2+1 or 3+1.
    • Quality AI (Section b): Ground truth was based on "ACEP quality of 3 or above by expert readers." Similar to Section a, a specific adjudication method beyond "expert readers" is not detailed.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance:

    The document does not explicitly describe a traditional MRMC comparative effectiveness study that directly quantifies the improvement of human readers with AI assistance versus without AI assistance.

    The Quality AI section (b) indicates that 26 users (including 18 novice users) conducted 396 lung and cardiac scans using the real-time quality AI feedback. This suggests an evaluation of the AI's ability to guide users to acquire diagnostic quality images, which is an indirect measure of assisting human performance. However, it does not provide an effect size of how much human readers improve in their interpretation or diagnosis with AI assistance. The study focuses on the AI's ability to help users acquire diagnostic quality images.
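For clarity, the two conditional agreement rates quoted for the diagnostic classification (98.3% and 98.0%) have the form sketched below; the clip labels here are invented toy data, not the study's:

```python
# Illustrative sketch only; the clip labels are invented.
def conditional_agreement(clips):
    """clips: iterable of (expert_diagnostic, ai_diagnostic) booleans.

    Returns two rates:
    - among clips experts rated diagnostic (ACEP >= 3), the fraction the AI
      also rated at least "Minimum criteria met" (cf. the 98.3% figure);
    - among clips the AI rated diagnostic, the fraction experts confirmed
      (cf. the 98.0% figure).
    """
    clips = list(clips)
    expert_pos = [ai for expert, ai in clips if expert]
    ai_pos = [expert for expert, ai in clips if ai]
    return sum(expert_pos) / len(expert_pos), sum(ai_pos) / len(ai_pos)

# Toy set of 10 clips: (expert says diagnostic, AI says diagnostic)
toy = [(True, True)] * 8 + [(True, False), (False, True)]
print(conditional_agreement(toy))  # each rate is 8/9 for this toy data
```

Note the two rates condition on different denominators (expert-positive clips versus AI-positive clips), which is why the study reports them separately.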

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    Yes, standalone performance was evaluated for the following:

    • Left Ventricle Wall Thickness and IVC measurements: The performance (ICC) was calculated directly between the AI's measurements and the expert-derived ground truth. This is a standalone performance metric.
    • Quality AI (Section a): The overall agreement (ICC) between the Quality AI and quality ratings by experienced sonographers was calculated. This also represents standalone performance of the AI's quality assessment function.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The ground truth used for the evaluated functionalities was expert consensus/measurement:

    • LV Wall Thickness and IVC measurements: Average measurement of three experts.
    • Quality AI: Quality ratings by experienced sonographers (Section a) and ACEP quality scores by expert readers (Section b).

    No mention of pathology or outcomes data as ground truth.

    8. The sample size for the training set

    The document explicitly states: "The test data was entirely separated from the training/tuning datasets and was not used for any part of the training/tuning." However, it does not provide the specific sample size for the training set.

    9. How the ground truth for the training set was established

    The document does not explicitly describe how the ground truth for the training set was established. It only mentions that the AI models use "non-adaptive machine learning algorithms trained with clinical data." The Predetermined Change Control Plan also refers to "new training data" and augmenting the training dataset, but without details on ground truth establishment for these training datasets.

