K Number: K242807
Manufacturer:
Date Cleared: 2025-04-04 (199 days)
Product Code:
Regulation Number: 892.2100
Panel: RA (Radiology)
Reference & Predicate Devices: N/A
Intended Use

The HeartFocus software is intended to assist and guide medical professionals in the acquisition of cardiac ultrasound images. HeartFocus software is an accessory to compatible general-purpose diagnostic ultrasound systems. HeartFocus guides the acquisition of two-dimensional transthoracic echocardiography (2D-TTE), specifically the acquisition of the following standard views: Parasternal Long-Axis (PLAX), Parasternal Short-Axis at the Aortic Valve (PSAX-AV), Parasternal Short-Axis at the Mitral Valve (PSAX-MV), Parasternal Short-Axis at the Papillary Muscle (PSAX-PM), Apical 4-Chamber (A4C), Apical 5-Chamber (A5C), Apical 2-Chamber (A2C), Apical 3-Chamber (A3C), Subcostal 4-Chamber (SC-4C), and Subcostal Inferior Vena Cava (SC-IVC).

HeartFocus software is indicated for use in adult patients who require a cardiac ultrasound exam.

Device Description

The HeartFocus software is a radiological computer-assisted acquisition guidance system that provides real-time user guidance during echocardiography to assist the user in acquiring anatomically standard diagnostic-quality 2D echocardiographic views. HeartFocus software is an accessory to compatible general-purpose diagnostic ultrasound systems. HeartFocus is intended to be used by medical professionals who have received appropriate training on ultrasound basics and training on using the HeartFocus software, provided by either DESKi or by a trained medical professional while using approved training materials.

It supports the acquisition of 10 echocardiographic views: Parasternal Long-Axis (PLAX), Parasternal Short-Axis at the Aortic Valve (PSAX-AV), Parasternal Short-Axis at the Mitral Valve (PSAX-MV), Parasternal Short-Axis at the Papillary Muscle (PSAX-PM), Apical 4-Chamber (A4C), Apical 5-Chamber (A5C), Apical 2-Chamber (A2C), Apical 3-Chamber (A3C), Subcostal 4-Chamber (SC-4C), and Subcostal Inferior Vena Cava (SC-IVC).

To acquire these views, HeartFocus connects to a compatible ultrasound system and receives a stream of ultrasound images from it.

The standard views acquired with HeartFocus may be assessed by qualified medical professionals to support their decision-making regarding patient care. The collected exams can be transferred, notably via DICOM protocols.

HeartFocus is an application that operates entirely offline, without requiring a cloud server to provide its functionalities. All collected medical data is stored locally on the tablet. This data is never transferred to a server controlled by DESKi.

It offers four major functionalities to assist healthcare professionals in acquiring cardiac ultrasound images: Live guidance, Diagnostic-quality view detection, Auto record, and Best-effort record.
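The Auto record behavior can be pictured as a small state loop: buffer frames while the view classifier reports diagnostic quality, and save a clip once enough consecutive high-quality frames accumulate. The sketch below is purely illustrative; the class name, frame threshold, and trigger logic are assumptions, not DESKi's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class AutoRecorder:
    """Toy auto-record loop (illustrative, not the actual device logic):
    buffer frames while the view classifier reports diagnostic quality,
    and save the clip once min_frames consecutive good frames are seen."""
    min_frames: int = 30                       # hypothetical clip length
    _buffer: list = field(default_factory=list)
    clips: list = field(default_factory=list)

    def on_frame(self, frame, is_diagnostic_quality: bool) -> None:
        if is_diagnostic_quality:
            self._buffer.append(frame)
            if len(self._buffer) >= self.min_frames:
                # Auto record fires: persist the buffered clip.
                self.clips.append(list(self._buffer))
                self._buffer.clear()
        else:
            # Quality lost: discard the partial buffer.
            self._buffer.clear()

rec = AutoRecorder(min_frames=3)
for i, ok in enumerate([True, True, False, True, True, True]):
    rec.on_frame(f"frame{i}", ok)
# One clip saved, built from the three consecutive good frames.
```

A "Best-effort record" variant would differ mainly in its fallback: saving the best available buffer even when the quality threshold is never fully met.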

AI/ML Overview

The following analyzes the acceptance criteria and the studies demonstrating that the device meets them, based on the provided FDA clearance letter:

HeartFocus (V.1.1.1) Acceptance Criteria and Performance Study

1. Acceptance Criteria and Reported Device Performance

The acceptance criteria are primarily defined by the "Algorithms Performance Testing" section and the "Primary objectives, endpoints, and success criteria" tables (Table 2).

Table of Acceptance Criteria and Reported Device Performance:

| Feature | AI/ML Algorithm | Objective | Endpoint | Acceptance Criteria | Performance (lower bound of 95% CI) | Performance (point estimate) | Meets Criteria? |
|---|---|---|---|---|---|---|---|
| Diagnostic-quality view detection | View classification | Classify ultrasound images with accuracy similar to experts | Cohen's kappa between model predictions and expert ground-truth labels (by frame) | > 0.6 (substantial agreement) | 0.699 to 0.873 | Not explicitly stated; all lower bounds > 0.6 | Yes |
| Live guidance | Guidance | Provide successful guidance cues on ultrasound frames | Positive predictive value of successful guidance cues (by frame) | > 0.8 | 0.810 to 0.953 | Not explicitly stated; all lower bounds > 0.8 | Yes |
| Auto record | View classification + recording | Save high-quality records according to experts | Positive predictive value of high-quality records among auto-records (by clip) | Lower bound > 0.6 AND point estimate > 0.8 | 0.846 to 1.000 | 0.997 (Auto record only) | Yes |

(Note: The document also states a PPV range of 0.816 to 1.000 for Auto record and Best-Effort record combined, with no explicit point estimate for this combined metric but satisfying the lower bound criteria.)
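For concreteness, the by-frame Cohen's kappa endpoint can be computed from paired label sequences. This is a minimal stdlib-only sketch; the example labels and function name are illustrative, not from the submission:

```python
from collections import Counter

def cohens_kappa(pred, truth):
    """Cohen's kappa between model predictions and expert ground-truth
    labels: (observed agreement - chance agreement) / (1 - chance)."""
    assert len(pred) == len(truth) and pred
    n = len(pred)
    p_o = sum(p == t for p, t in zip(pred, truth)) / n   # observed agreement
    cp, ct = Counter(pred), Counter(truth)
    p_e = sum(cp[l] * ct[l] for l in set(cp) | set(ct)) / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Kappa > 0.6 is conventionally read as "substantial agreement";
# the submission applies its threshold to the lower bound of the
# 95% CI rather than to the point estimate.
k = cohens_kappa(["PLAX", "A4C", "PLAX", "A2C"],
                 ["PLAX", "A4C", "A4C", "A2C"])
```

In practice the 95% CI lower bound would typically be obtained by bootstrapping over frames or patients; the document does not state which resampling unit was used.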

Studies Proving Acceptance Criteria are Met:

The device's performance against these criteria was primarily demonstrated through "Algorithms Performance Testing" and a "Clinical Study."

2. Sample Size for Test Set and Data Provenance

  • Algorithms Performance Testing (Test Set):

    • Total Patients: 290 patients
    • Total Ultrasound Images: 361,104 ultrasound images
    • Additional retrospective evaluation data: 240 patients (120 US + 120 non-US) from the clinical trial.
    • Specific breakdown for "Diagnostic-quality view detection": 30,361 images from 14 patients.
    • Specific breakdown for "Live guidance": 270,582 images from 20 patients.
    • Specific breakdown for "Auto record and Best-effort record": 211 long-duration clips from 34 patients.
    • Data Provenance: Not explicitly stated for the "Algorithms Performance Testing" subset, but it is mentioned that "Both training/tuning data and test data were collected on patients of varying body mass index (BMI), age, and sex."
  • Clinical Study (Test Set for Clinical Utility):

    • Total Patients: 240 adults (120 patients at Site 1, France; 120 patients at Site 2, USA).
    • Data Provenance: Prospective multicentric clinical study. Site 1 in France, Site 2 in the USA.

3. Number of Experts and Qualifications for Ground Truth

  • Algorithms Performance Testing (Diagnostic-quality view detection):

    • Number of Experts: At least 2 (an expert annotator and an expert reviewer), with a third expert for disagreement resolution.
    • Qualifications: "Experts (cardiologists and/or experienced sonographers)." Specific experience level (e.g., 10 years) is not provided.
  • Clinical Study (Image Quality Assessment and Clinical Decision Making):

    • Number of Experts: A panel of five (5) expert cardiologist readers.
    • Qualifications: "Expert cardiologist readers." Specific experience level is not provided.

4. Adjudication Method for the Test Set

  • Algorithms Performance Testing (Diagnostic-quality view detection):

    • Method: First annotation by an expert annotator, reviewed by a second expert reviewer. Disagreements were resolved either through direct reconciliation by the two experts or by a third expert. This is a form of 2+1 adjudication (or 2-expert consensus with 3rd for tie-breaking).
  • Clinical Study (Image Quality Assessment and Clinical Decision Making):

Method: Five (5) expert cardiologist readers independently provided assessments, and the panel's results were used for statistical analysis. The document does not describe a consensus-based adjudication among the five readers for individual cases; rather, their independent assessments were analyzed.
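The 2+1 adjudication scheme reduces to a simple per-case rule: keep the label when annotator and reviewer agree, otherwise defer to the third expert. A minimal sketch (function and label values are hypothetical):

```python
def adjudicate(annotator, reviewer, tie_breaker):
    """2+1 adjudication: accept the annotator's label when the reviewer
    agrees; on disagreement, the third expert's label is final."""
    return [a if a == r else t
            for a, r, t in zip(annotator, reviewer, tie_breaker)]

# Case 2 has a disagreement, so the third expert's "PLAX" wins.
labels = adjudicate(["A4C", "PLAX", "A2C"],
                    ["A4C", "A4C",  "A2C"],
                    ["PLAX", "PLAX", "A4C"])
```

Note this models only the tie-breaking path; the document says disagreements could also be resolved by direct reconciliation between the two original experts.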

5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study

  • Was an MRMC study done? Yes, a clinical study was conducted comparing the performance of Registered Nurses (novice users with AI assistance) to trained medical professionals (sonographers or cardiologists without AI assistance).

  • Effect Size (Improvement with AI vs. without AI assistance):
    The study assessed the ability of RNs using HeartFocus (AI assistance) to acquire echocardiographic exams, and compared the diagnostic informativeness of those exams to those acquired by trained medical professionals without cardiac guidance (without AI assistance).

    The primary endpoints showed extremely high performance for the RNs with HeartFocus:

    • Qualitative Visual Assessment of LV Size: 100% diagnostic quality
    • Qualitative Visual Assessment of LV Function: 100% diagnostic quality
    • Qualitative Visual Assessment of RV Size: 100% diagnostic quality
    • Qualitative Visual Assessment of Non-Trivial Pericardial Effusion: 100% diagnostic quality

    Additionally, the "proportion of scans in which the diagnostic decision was the same between study (RN with HeartFocus) and control (trained medical professional without AI) exams was very high."

    • For primary clinical parameters: 87.5% to 98.3% agreement.
    • For secondary clinical parameters: 87.1% to 99.6% agreement.

An explicit effect size for how much human readers improve with versus without AI assistance (e.g., an AUC difference, the typical MRMC metric) is not quantified. The demonstrated clinical utility is instead that novice users (RNs) using HeartFocus can acquire diagnostic-quality cardiac ultrasound exams whose clinical assessments are highly concordant with those from exams performed by trained medical professionals without AI assistance. This implies a significant enablement effect for less-trained users.
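The agreement figures above are proportions, so a confidence bound such as the Wilson score interval is one standard way to report them. The submission does not state which interval method was used, and the 118-of-120 example below is purely illustrative of the ≈98.3% upper figure:

```python
import math

def wilson_lower(successes: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for a binomial
    proportion (z = 1.96 for a two-sided 95% CI)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - margin) / denom

# Illustrative only: 118 concordant scans out of 120 ≈ 98.3% agreement,
# with a Wilson 95% lower bound around 94%.
lb = wilson_lower(118, 120)
```

The Wilson interval behaves better than the naive normal approximation for proportions near 0 or 1, which matters here since several reported agreements approach 100%.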

6. Standalone (Algorithm Only) Performance

  • Was a standalone performance study done? Yes, the "Algorithms Performance Testing" section describes the performance of the AI/ML algorithms independently. This includes:
    • Diagnostic-quality view detection: Cohen's kappa score between model predictions and expert ground truth.
    • Live guidance: Positive predictive value of successful guidance cues from the model.
    • Auto record: Positive predictive value of high-quality records generated by the recording algorithms.

These demonstrate the algorithm's performance separate from human interaction, although the features are designed to assist humans.

7. Type of Ground Truth Used

  • Algorithms Performance Testing:

    • Diagnostic-quality view detection: Expert consensus (between 2-3 cardiologists/sonographers).
    • Live guidance: Based on achieving a position closer to the target position (where diagnostic-quality frames are captured), which implicitly relies on the definition of "diagnostic-quality" also established by experts.
    • Auto record: "High-quality records according to experts."
  • Clinical Study:

    • Image Quality and Clinical Assessments: Expert opinion/consensus from a panel of five (5) expert cardiologist readers. They assessed "sufficient information to assess 12 clinical parameters" and "diagnostic image quality per clip... using the ACEP scale," and made qualitative/quantitative assessments of various cardiac parameters. This is primarily expert consensus.

8. Sample Size for the Training Set

  • Total Patients for Training and Tuning: 1,483 patients.
  • Total Ultrasound Images for Training and Tuning: 1,204,113 ultrasound images.

9. How Ground Truth for Training Set Was Established

The document states that the AI/ML algorithms "were trained and tuned on 1,483 patients and 1,204,113 ultrasound images." It does not explicitly detail how ground truth was established for the training set. However, the process is strongly implied to mirror the test set's: expert annotation and review/consensus by cardiologists and/or experienced sonographers for classifying images, identifying diagnostic quality, and defining target probe positions, since that is how the test-set ground truth against which the model was compared was established.

892.2100 Radiological acquisition and/or optimization guidance system.

(a) Identification. A radiological acquisition and/or optimization guidance system is a device that is intended to aid in the acquisition and/or optimization of images and/or diagnostic signals. The device interfaces with the acquisition system, analyzes its output, and provides guidance and/or feedback to the operator for improving image and/or signal quality.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Design verification and validation must include:
(i) A detailed, technical device description, including a detailed description of the impact of any software and hardware on the device's functions, the associated capabilities and limitations of each part, and the associated inputs and outputs.
(ii) A detailed, technical report on the non-clinical performance testing of the subject device in the intended use environments, using relevant consensus standards when applicable.
(iii) A detailed report on the clinical performance testing, obtained from either clinical testing, accepted virtual/physical systems designed to capture clinical variability, comparison to a closely-related device with established clinical performance, or other sources that are justified appropriately. The choice of the method must be justified given the risk of the device and the general acceptance of the test methods. The report must include the following:
(A) A thorough description of the testing protocol(s).
(B) A thorough, quantitative evaluation of the diagnostic utility and quality of images/data acquired, or optimized, using the device.
(C) A thorough, quantitative evaluation of the performance in a representative user population and patient population, under anticipated conditions and environments of use.
(D) A thorough discussion on the generalizability of the clinical performance testing results.
(E) A thorough discussion on use-related risk analysis/human factors data.
(iv) A detailed protocol that describes, in the event of a future change, the level of change in the device technical specifications or indications for use at which the change or changes could significantly affect the safety or effectiveness of the device and the risks posed by these changes. The assessment metrics, acceptance criteria, and analytical methods used for the performance testing of changes that are within the scope of the protocol must be included.
(v) Documentation of an appropriate training program, including instructions on how to acquire and process quality images and video clips, and a report on usability testing demonstrating the effectiveness of that training program on user performance, including acquiring and processing quality images.
(2) The labeling required under § 801.109(c) of this chapter must include:
(i) A detailed description of the device, including information on all required and/or compatible parts.
(ii) A detailed description of the patient population for which the device is indicated for use.
(iii) A detailed description of the intended user population, and the recommended user training.
(iv) Detailed instructions for use, including the information provided in the training program used to meet the requirements of paragraph (b)(1)(v) of this section.
(v) A warning that the images and data acquired using the device are to be interpreted only by qualified medical professionals.
(vi) A detailed summary of the reports required under paragraphs (b)(1)(ii) and (iii) of this section.
(vii) A statement on upholding the As Low As Reasonably Achievable (ALARA) principle with a discussion on the associated device controls/options.