K Number: K251456
Manufacturer:
Date Cleared: 2025-06-05 (24 days)
Product Code:
Regulation Number: 892.2050
Panel: RA (Radiology)
Reference & Predicate Devices:
Intended Use

The BrightHeart View Classifier device is intended to analyze fetal 2D ultrasound images and video clips using machine learning techniques to automatically detect standard views during fetal heart scanning.

The BrightHeart View Classifier device is intended to be used as an adjunct to the acquisition and interpretation of fetal anatomic ultrasound examinations at the second or third trimester of pregnancy performed with transabdominal probes.

Device Description

BrightHeart View Classifier is a cloud-based software-only device which uses artificial intelligence (AI) to detect standard views during fetal heart scanning in fetal ultrasound images and video clips.

BrightHeart View Classifier is intended to be used by qualified, trained healthcare professionals (sonographers, MFMs, OB/GYNs, and fetal surgeons) in a professional prenatal ultrasound (US) imaging environment. It supports the acquisition and interpretation of 2D grayscale fetal ultrasound examinations by automatically classifying video clips and images into standard views, by automatically extracting example frames of standard views from video clips, and by automatically assessing whether the documentation of each standard view in video clips and images satisfies an acquisition protocol defined by the center. Annotated DICOM files generated by the device cannot be modified by the user.

AI/ML Overview

Here's a detailed breakdown of the acceptance criteria and the study proving the BrightHeart View Classifier device meets them, based on the provided FDA 510(k) clearance letter:


1. Table of Acceptance Criteria and Reported Device Performance

The provided document does not explicitly state "acceptance criteria" as a set of predefined thresholds. However, it does present objective performance metrics derived from a validation study. For the purpose of this response, we will consider the reported performance metrics as demonstrative of meeting implicit acceptance criteria for clinical utility and safety, especially since the submission states the device "is as safe and effective as the predicate device and supports a determination of substantial equivalence."

| Metric | Acceptance Criteria (Implicit) | Reported Device Performance |
| --- | --- | --- |
| Mean standard view recognition sensitivity | High sensitivity for detecting standard views, indicating a low rate of missed standard views. (Implicitly, the reported value was deemed sufficient for clearance given its comparison to the predicate, which shares the exact same algorithm.) | 0.939 (95% CI: 0.917 – 0.960) |
| Mean standard view recognition specificity | High specificity for identifying standard views, indicating a low rate of incorrectly identified views. (Implicitly, the reported value was deemed sufficient for clearance.) | 0.984 (95% CI: 0.973 – 0.996) |
| Performance across subgroups (geographical region, US machine make, gestational age, mother's BMI, mother's age) | Consistent performance across diverse subgroups. | "Performance was consistent across subgroups." |
| Performance across mother's race (Asian and Black mothers) | Consistent performance across mother's race. | "Specificity 95% CI lower bound was slightly lower for Asian and Black mothers, possibly due to large confidence intervals and small sample size." (A slight deviation, seemingly acceptable given the small sample sizes in those subgroups.) |
| Supported ultrasound machine vendors | Device performance validated for specific ultrasound machine makes. | Validated with General Electric, Philips, Samsung, and Siemens ultrasound devices. |
| Supported gestational age | Device performance validated for specific gestational ages. | Validated for pregnancies at 18 weeks of gestation or later. |
| Supported probe type | Device performance validated for specific probe types. | Validated for transabdominal probes. |
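The sensitivity and specificity figures above are point estimates with 95% confidence intervals. A minimal Python sketch of how such an interval can be computed with the Wilson score method; the counts below are hypothetical, since the summary does not report per-view true/false positive and negative counts:

```python
import math

def wilson_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return centre - half, centre + half

# Hypothetical counts chosen only to reproduce the reported point estimates.
tp, fn = 939, 61   # sensitivity = tp / (tp + fn)
tn, fp = 984, 16   # specificity = tn / (tn + fp)

sens = tp / (tp + fn)
spec = tn / (tn + fp)
print(f"sensitivity = {sens:.3f}, 95% CI = {wilson_ci(tp, tp + fn)}")
print(f"specificity = {spec:.3f}, 95% CI = {wilson_ci(tn, tn + fp)}")
```

The Wilson interval is one common choice for proportions near 1.0; the summary does not state which interval method BrightHeart actually used, so the exact CI bounds will differ from the reported 0.917 – 0.960 and 0.973 – 0.996.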

2. Sample Size Used for the Test Set and Data Provenance

  • Sample Size for Test Set: 2290 clinically acquired images and frames from video clips. These were derived from 579 fetal ultrasound examinations.
  • Data Provenance: The data was retrospective, consisting of clinically acquired images and frames. The country of origin for the data includes U.S.A. and France.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

  • Number of Experts: Two experts.
  • Qualifications of Experts: One sonographer and one Maternal-Fetal Medicine (MFM) specialist with experience in fetal echocardiography.

4. Adjudication Method for the Test Set

The adjudication method was described as a truthing process in which the sonographer and the MFM specialist independently determined the presence or absence of standard views. The document does not explicitly state a 2+1 or 3+1 method with a third reader as tie-breaker, but it implies a consensus or agreement process between the two experts.
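Two-reader truthing processes like this one are often quantified with a chance-corrected agreement statistic. A minimal sketch using Cohen's kappa on toy labels; this is illustrative only, as the summary does not report inter-reader agreement:

```python
from collections import Counter

def cohens_kappa(labels_a: list, labels_b: list) -> float:
    """Chance-corrected agreement between two readers over the same cases."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed fraction of cases where the two readers agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Agreement expected by chance from each reader's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Toy labels: 1 = standard view present, 0 = absent (hypothetical cases).
reader_1 = [1, 1, 0, 1, 0, 1, 1, 0]
reader_2 = [1, 1, 0, 1, 1, 1, 1, 0]
print(cohens_kappa(reader_1, reader_2))
```

Kappa near 1.0 indicates the two readers rarely needed adjudication; cases where they disagree are the ones a 2+1-style tie-breaker would resolve.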

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly mentioned or reported in this document. The study described focuses on the standalone performance of the AI algorithm. Therefore, there is no reported effect size of how much human readers improve with AI vs. without AI assistance.

6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

Yes, a standalone performance study was done. The reported sensitivity and specificity values directly relate to the BrightHeart View Classifier's (algorithm only) ability to identify standard views independently. The document states: "The performance testing demonstrated that BrightHeart View Classifier identifies standard views with a mean standard view recognition sensitivity of 0.939..." This confirms a standalone performance evaluation.

7. The Type of Ground Truth Used

The type of ground truth used was expert consensus / clinical expert interpretation. It was derived through a "truthing process" by a sonographer and an MFM specialist.

8. The Sample Size for the Training Set

The document explicitly states: "The ultrasound examinations used for training and validation are entirely distinct from the examinations used in performance testing." However, the exact sample size for the training set is not provided in the given text.

9. How the Ground Truth for the Training Set Was Established

The document states: "The ultrasound examinations used for training and validation are entirely distinct from the examinations used in performance testing." While it confirms distinct data, it does not explicitly describe how the ground truth for the training set was established. It can be inferred that it likely followed a similar expert review process as the test set, but this information is not detailed in the provided text.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).