
510(k) Data Aggregation

    K Number
    K243851
    Device Name
    CHLOE BLAST
    Manufacturer
    Fairtility Ltd.
    Date Cleared
    2025-08-15 (242 days)
    Product Code
    Regulation Number
    884.6195
    Reference & Predicate Devices
    N/A

    Intended Use

    CHLOE BLAST is indicated to provide adjunctive information on events occurring during embryo development that may predict further development to the blastocyst stage on Day 5 of development. This adjunctive information aids in the selection of embryo(s) for transfer on Day 3, when, following morphological assessment, there are multiple embryos deemed suitable for transfer or freezing.

    CHLOE BLAST is to be used only for the analysis of images captured by the EmbryoScope version D incubator system.

    Device Description

    CHLOE BLAST is a decision-support tool designed to automatically analyze time-lapse videos of developing embryos retrieved from the EmbryoScope (version D) time-lapse incubator (TLI) system. It is intended to provide adjunctive information on developmental events up to Day 3 that may predict progression to the blastocyst stage by Day 5.

    CHLOE BLAST is a cloud-based software as a medical device (SaMD) that uses a convolutional neural network (CNN) to analyze TLI videos from insemination to Day 3. The output is the "CHLOE Score", which is a blastocyst development prediction value associated with the likelihood of the embryo reaching blastocyst stage at Day 5.
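
    To make the pipeline concrete, below is a minimal, hypothetical sketch of how a CNN-based video scorer of this kind can be wired: per-frame features from a small convolutional backbone are pooled over time and mapped to a single probability-like score. The class names, layer sizes, and frame count are illustrative assumptions and do not represent Fairtility's actual implementation.

```python
# Hypothetical sketch of a CNN-based time-lapse scorer (not Fairtility's code).
# A small frame-level CNN encodes each image; features are mean-pooled over
# time and mapped to one probability-like "blastocyst development" score.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, x):               # x: (frames, 1, H, W)
        return self.net(x)              # (frames, feat_dim)

class BlastScorer(nn.Module):
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.encoder = FrameEncoder(feat_dim)
        self.head = nn.Linear(feat_dim, 1)

    def forward(self, video):           # video: (frames, 1, H, W)
        feats = self.encoder(video)     # per-frame features
        pooled = feats.mean(dim=0)      # temporal mean pooling
        return torch.sigmoid(self.head(pooled))  # score in (0, 1)

# Example: score one grayscale time-lapse clip (frames up to Day 3, all values invented).
video = torch.randn(72, 1, 128, 128)
score = BlastScorer()(video)
print(f"Illustrative blastocyst-likelihood score: {score.item():.3f}")
```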

    This information aids in the selection of embryo(s) for transfer on Day 3, when, following morphological assessment, there are multiple normally fertilized embryos deemed suitable for transfer or freezing. In a clinical setting, the CHLOE score is intended to serve the embryologist as adjunctive information, consulted only after the embryologists complete their independent morphological assessments based on the lab's standard of care (e.g., Istanbul Consensus Grading).

    The main user interaction is through a graphical user interface (GUI) accessed via the Chrome browser. It includes screens for a treatments overview, manual embryo assessment, and score presentation, and integrates with the day-to-day operation of IVF clinics using TLI.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets those criteria, based on the provided FDA clearance letter:

    Acceptance Criteria and Device Performance

    1. Table of Acceptance Criteria and Reported Device Performance

    Note: The document presents acceptance criteria primarily as "AUC lower bound >0.8" for various performance metrics. It also establishes an Odds Ratio (OR) greater than 1 as the primary endpoint for clinical utility.

    Non-Clinical Performance - Algorithm Validation

    | Metric / Test | Acceptance Criterion | Reported Device Performance | Meets Criterion? |
    |---|---|---|---|
    | Morphokinetic Events Detection Accuracy (Overall) | N/A (accuracy reported, not AUC) | 0.82 (95% CI: 0.81, 0.84) | N/A |
    | Morphokinetic Events Detection Accuracy (2PNs) | N/A (accuracy reported, not AUC) | 0.84 (95% CI: 0.83, 0.85) | N/A |
    | Morphokinetic Events Detection (Overall AUC) | AUC lower bound > 0.8 | N/A (accuracy, not AUC, reported for the overall model) | Yes (implicitly, as sub-model AUCs are mentioned in relation to this criterion) |
    | Morphokinetic Events Detection (2PNs sub-model AUC) | AUC lower bound > 0.8 | 0.84 (95% CI: 0.83, 0.85). This appears to be the accuracy value, not AUC: the text states "Accuracy of the sub-model... was 0.84" immediately after stating that the "AUC lower bound >0.8" criterion was met, a slight inconsistency in the document's reporting. | Yes (assuming 0.84 refers to AUC) |
    | Morphokinetic Events Detection (sub-group: Age) | AUC lower bound > 0.8 | Not met (performance was not consistent, indicating some subgroups may not have met the criterion; specific subgroup AUC values are not provided) | No (stated in text) |
    | Morphokinetic Events Detection (sub-groups: Underweight and Obese BMI) | AUC lower bound > 0.8 | Not met (performance was not consistent, indicating some subgroups may not have met the criterion; specific subgroup AUC values are not provided) | No (stated in text) |
    | Blast Prediction (Overall AUC) | AUC lower bound > 0.8 | 0.88 (95% CI: 0.86, 0.90) | Yes |
    | Blast Prediction (all subgroups except Obese BMI) | AUC lower bound > 0.8 | AUCs similar to the overall result and higher than 0.8 | Yes |
    | Blast Prediction (Obese BMI subgroup AUC) | AUC lower bound > 0.8 | Not met (the specific AUC for this subgroup is not provided, only that the criterion "was not met") | No (stated in text) |
    | Blast Prediction (2PN embryos AUC) | N/A (a reduction in AUC was observed, but no specific criterion is given for this subgroup) | 0.81 (95% CI: 0.78, 0.83) | N/A (but still > 0.8) |
    | Blast Prediction (Good/Fair embryos AUC) | N/A (a reduction in AUC was observed, but no specific criterion is given for this subgroup) | 0.74 (95% CI: 0.69, 0.78) | N/A (lower than 0.8, but an explanation is given, and the clinical study focused on this subgroup) |

    Non-Clinical Performance - Reproducibility Test

    | Metric / Test | Acceptance Criterion | Reported Device Performance | Meets Criterion? |
    |---|---|---|---|
    | AUC with Optical Augmentations | AUC lower bound > 0.8 | All AUCs > 0.89, CI lower bound > 0.87 | Yes |

    Clinical Performance - Primary Endpoint

    | Metric / Test | Acceptance Criterion | Reported Device Performance | Meets Criterion? |
    |---|---|---|---|
    | Odds Ratio (OR) for Good/Fair Embryos (CHLOE-assisted) | OR > 1 | 5.67 (95% CI: 4.6, 6.99) | Yes |

    Clinical Performance - Secondary Endpoints (Highlights)

    | Metric / Test | Acceptance Criterion | Reported Device Performance | Meets Criterion? |
    |---|---|---|---|
    | OR for All Embryos (CHLOE-assisted) | N/A (secondary endpoint) | 8.51 (95% CI: 6.97, 10.38) | N/A |
    | Sensitivity (CHLOE-assisted) | N/A (performance measure) | 0.846 | N/A |
    | Specificity (CHLOE-assisted) | N/A (performance measure) | 0.444 | N/A |
    | PPV (CHLOE-assisted) | N/A (performance measure) | 0.629 | N/A |
    | NPV (CHLOE-assisted) | N/A (performance measure) | 0.721 | N/A |
    | OR for Individual Embryologists (CHLOE-assisted) | OR > 1 | Improved and > 1 for all embryologists | Yes |
    | OR in Subgroups (Age and BMI) (CHLOE-assisted) | OR > 1 | OR > 1 in all subgroups (lower bound of CI > 1 in all but one age and one BMI category) | Yes (mostly) |
    | Subject-level Sensitivity (CHLOE-assisted) | N/A (performance measure) | 87.50% to 92.86% | N/A |
    | Top 2 Embryo Analysis OR (CHLOE-assisted) | N/A (performance measure) | 10.73 (95% CI: 6.19, 18.60) | N/A |
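
    As a consistency check on the CHLOE-assisted secondary-endpoint metrics in the table above, the reported PPV (0.629) and NPV (0.721) follow from the reported sensitivity (0.846) and specificity (0.444) when the blastulation prevalence is roughly 53%. The prevalence is not stated in the document, so the sketch below is only an illustration of how the four metrics relate, not a reproduction of the study analysis.

```python
# Illustrative back-calculation: PPV/NPV from sensitivity, specificity, and
# an assumed prevalence (the prevalence is not reported in the 510(k) summary).
def ppv_npv(sensitivity: float, specificity: float, prevalence: float):
    tp = sensitivity * prevalence            # true positives (per unit population)
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)      # true negatives
    fn = (1 - sensitivity) * prevalence      # false negatives
    return tp / (tp + fp), tn / (tn + fn)

sens, spec = 0.846, 0.444
for prev in (0.45, 0.50, 0.53, 0.55):
    ppv, npv = ppv_npv(sens, spec, prev)
    print(f"prevalence={prev:.2f}  PPV={ppv:.3f}  NPV={npv:.3f}")
# At a prevalence near 0.53 this reproduces PPV ~0.63 and NPV ~0.72,
# matching the values reported in the table above.
```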

    Study Details Proving Device Meets Acceptance Criteria

    2. Sample Sizes and Data Provenance

    • Non-Clinical Performance (Algorithm Validation):
      • Morphokinetic Events Detection: 1,094 embryos from 143 slides, collected from two sites: one in the US and one in Norway. The data provenance is retrospective; the dataset is described as a "test dataset... entirely independent from the dataset utilized in the CHLOE BLAST clinical study."
      • Blast Prediction: 1,726 embryos from 233 slides. Collected from two sites: one in the US and one in Norway. The data provenance is retrospective.
    • Clinical Performance (CHLOE BLAST Clinical Study):
      • 703 embryos from 59 mothers.
      • Data collected from three different sites located in the United States.
      • Data provenance: Prospective collection for the purpose of this study (described as a "pivotal, multicenter, single arm, observational, prospective assessment study").

    3. Number of Experts and Qualifications for Ground Truth

    • Non-Clinical Performance (Algorithm Validation):
      • Morphokinetic Stages and Blast Annotations: Three independent embryologists.
      • Qualifications: "The annotators were not involved in the training or tuning of the model and were blinded to each other's labels." No explicit years of experience are stated for these annotators.
    • Clinical Performance (CHLOE BLAST Clinical Study):
      • Morphology Grading (Assessors): Three embryologists.
      • Qualifications: "blinded to CHLOE information," and performed grading according to SART standards. No explicit years of experience are stated.
      • Clinical Assessment (Panelists): Five independent embryologists.
      • Qualifications: All were "in practice during the study period and from a range of geographical areas within the United States." Three were senior embryologists with over 10 years of clinical embryology experience each, and the other two were junior embryologists with less than 3 years of clinical embryology experience.

    4. Adjudication Method for the Test Set

    • Non-Clinical Performance (Algorithm Validation):
      • Ground Truth: "Each embryo video was viewed by three independent embryologists who provided their morphokinetic stages and Blast annotations based on the time-lapse videos. The annotators were not involved in the training or tuning of the model and were blinded to each other's labels." This implies a consensus-based approach, but the document does not explicitly describe how the three annotations were combined; it states only that "The TLI videos were annotated at a frame level with the ground truth of one of the morphokinetic stages and at a video level with blastulation results."
    • Clinical Performance (CHLOE BLAST Clinical Study):
      • Morphology Grading (Assessors): "Then, the following parameters were categorized by majority agreement (at least 2 of 3 Assessors): Severe asymmetry (yes/no), Fragmentation > 25% (yes/no), Number of cells (1 through 8, 9≤)." This is a clear 2-of-3 majority consensus method for those specific parameters (a minimal sketch of this adjudication appears after this list).
      • Clinical Assessment (Panelists): No explicit adjudication method is stated for the Panelists' predictions. Each Panelist performed their own independent predictions, and the study analyzed the collective performance as well as individual improvements.
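
    Below is a minimal sketch of the 2-of-3 majority adjudication described above, assuming each of the three Assessors contributes one yes/no call per parameter. The function and variable names are illustrative and are not taken from the study protocol.

```python
# Hypothetical 2-of-3 majority adjudication for binary grading parameters.
from collections import Counter

def adjudicate(calls: list[bool]) -> bool:
    """Return the label agreed on by at least 2 of the 3 Assessors."""
    assert len(calls) == 3, "expects exactly three independent Assessor calls"
    return Counter(calls).most_common(1)[0][0]

# Example: three Assessors grading "Fragmentation > 25%" for one embryo.
assessor_calls = [True, False, True]
print(adjudicate(assessor_calls))  # True (2 of 3 agreed)
```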

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Yes, an MRMC comparative effectiveness study was done as part of the clinical performance study.
    • Effect Size of Human Readers Improvement with AI vs. without AI assistance:
      • The primary endpoint focused on the Odds Ratio (OR) for predicting blastocyst formation in Good/Fair embryos.
      • Without AI assistance (Morphology Only): OR = 3.77 (95% CI: 2.97, 4.79)
      • With AI assistance (Morphology + CHLOE Score): OR = 5.67 (95% CI: 4.6, 6.99)
      • This represents an improvement in the Odds Ratio from 3.77 to 5.67 for the primary endpoint.
      • For all embryos, the OR improved from 6.93 (without CHLOE) to 8.51 (with CHLOE).
      • For subject-level sensitivity, it improved from 80.36%-83.93% (traditional morphology) to 87.50%-92.86% (with CHLOE).
      • For Top 2 Embryo analysis, the OR improved from 3 (without CHLOE) to 10.73 (with CHLOE).
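
    The odds ratios above quantify how strongly a reader's transfer/freeze selections are associated with actual blastocyst formation. As a minimal sketch, the snippet below computes an OR with a Wald 95% CI from a 2x2 table; the counts are invented for illustration and are not the study data, and the submission's exact CI method is not described in the document.

```python
# Odds ratio with a Wald 95% CI from a 2x2 table of hypothetical counts
# (selected vs. not selected, against reached vs. did not reach blastocyst).
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a, b = selected embryos that did / did not blastulate;
    c, d = non-selected embryos that did / did not blastulate."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Invented counts, chosen only to illustrate the calculation.
or_, lo, hi = odds_ratio_ci(a=120, b=40, c=60, d=110)
print(f"OR = {or_:.2f} (95% CI: {lo:.2f}, {hi:.2f})")
```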

    6. Standalone (Algorithm Only without Human-in-the-Loop) Performance

    • Yes, a standalone performance assessment was done as part of the "Non-Clinical Performance – Algorithm Validation" section.
    • The algorithm's performance in predicting blastocyst formation was assessed independently, yielding an AUC of 0.88 (95% CI: 0.86, 0.90). This demonstrates the algorithm's capability on its own.
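
    For context on the "AUC lower bound > 0.8" acceptance criterion applied to this standalone assessment, the sketch below checks such a criterion with a percentile bootstrap over labels and scores. The data are simulated, and the CI method used in the actual submission is not specified in the document.

```python
# Illustrative percentile-bootstrap check of "95% CI lower bound of AUC > 0.8".
# The scores/labels are simulated; the submission's exact CI method is unknown.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, size=n)                  # blastocyst yes/no
scores = labels * 0.8 + rng.normal(0, 0.35, size=n)  # synthetic, CHLOE-like scores

boot_aucs = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)                 # resample with replacement
    if len(np.unique(labels[idx])) < 2:              # need both classes for AUC
        continue
    boot_aucs.append(roc_auc_score(labels[idx], scores[idx]))

lower = np.percentile(boot_aucs, 2.5)
print(f"AUC = {roc_auc_score(labels, scores):.3f}, "
      f"bootstrap 95% CI lower bound = {lower:.3f}, "
      f"criterion met: {lower > 0.8}")
```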

    7. Type of Ground Truth Used

    • For Non-Clinical Performance (Algorithm Validation):
      • Expert Consensus: Morphokinetic stages and blast annotations were established by three independent embryologists.
      • Outcomes Data: The "blastulation results" (blastocyst Yes/No) are actual outcomes.
    • For Clinical Performance (CHLOE BLAST Clinical Study):
      • Expert Consensus: Morphology grading by three "Assessors" with majority agreement (2 out of 3).
      • Outcomes Data: The "actual blastocyst outcome" (Yes/No) which the algorithm and human readers are predicting.

    8. Sample Size for the Training Set

    • The document states: "The study dataset included data collected specifically for the purpose of this study according to the predefined inclusion and exclusion criteria and was segregated from algorithm training and verification datasets."
    • "The dataset used for the performance test was entirely independent from the dataset utilized in the CHLOE BLAST clinical study described in section 9, and the clinics that provided data for the performance dataset were not used to collect data for the clinical study."
    • The specific sample size for the training set is NOT PROVIDED in this document. It only clearly states that the various test sets were independent from the training data.

    9. How Ground Truth for the Training Set Was Established

    • The document implies that the training data exists and was used to develop the CNN, but it does NOT specify how the ground truth for the training set was established. It only focuses on how ground truth was established for the independent testing and clinical validation sets.