
510(k) Data Aggregation

    Device Name:

    UniCel DxH 900 Coulter Cellular Analysis System; UniCel DxH Slidemaker Stainer II Coulter Cellular Analysis System

    Intended Use

    The UniCel DxH 900/DxH 690T Coulter Cellular Analysis System is a quantitative, multi-parameter, automated hematology analyzer for in vitro diagnostic use in screening patient populations found in clinical laboratories.

    The DxH 900/DxH 690T analyzer identifies and enumerates the following parameters:

    · Whole Blood (Venous or Capillary): WBC, RBC, HGB, HCT, MCV, MCH, MCHC, RDW, RDW-SD, PLT, MPV, NE%, NE#, LY%, LY#, MO%, MO#, EO%, EO#, BA%, BA#, NRBC%, NRBC#, RET%, RET#, MRV, IRF
    · Pre-Diluted Whole Blood (Venous or Capillary): WBC, RBC, HGB, HCT, MCV, MCH, MCHC, RDW, RDW-SD, PLT, MPV

    · Body Fluids (cerebrospinal, serous or synovial): TNC and RBC

    The UniCel DxH Slidemaker Stainer II Coulter Cellular Analysis System is a fully automated slide preparation and staining device that aspirates a whole-blood sample, smears a blood film on a clean microscope slide, and delivers a variety of fixatives, stains, buffers, and rinse solutions to that blood smear.

    Device Description

    The UniCel DxH 900/DxH 690T System contains an automated hematology analyzer (DxH 900 or DxH 690T) designed for in vitro diagnostic use in screening patient populations by clinical laboratories. The system provides a Complete Blood Count (CBC), Leukocyte 5-Part Differential (Diff), Reticulocyte (Retic) count, and Nucleated Red Blood Cell (NRBC) count on whole blood, as well as a Total Nucleated Count (TNC) and Red Cell Count (RBC) on body fluids (cerebrospinal, serous, and synovial).

    The DxH Slidemaker Stainer II is a fully automated slide preparation and staining device that aspirates a whole-blood sample, smears a blood film on a clean microscope slide, and delivers a variety of fixatives, stains, buffers, and rinse solutions to that blood smear.

    The DxH 900 System may consist of a workcell (multiple connected DxH 900 instruments with or without a DxH Slidemaker Stainer II), a stand-alone DxH 900, or a stand-alone DxH Slidemaker Stainer II. The DxH 690T System consists of a stand-alone DxH 690T instrument.

    AI/ML Overview

    The provided text is a 510(k) Summary for a medical device submission (K240252) for the UniCel DxH 900/DxH 690T Coulter Cellular Analysis System and the UniCel DxH Slidemaker Stainer II Coulter Cellular Analysis System. This document focuses on demonstrating substantial equivalence to predicate devices rather than proving a device meets specific acceptance criteria as would be the case for a novel device or a device requiring clinical efficacy trials.

    Therefore, the acceptance criteria are largely implied by the performance of the predicate device and established CLSI (Clinical and Laboratory Standards Institute) guidelines for analytical performance. The "study" proving the device met acceptance criteria is a series of non-clinical performance verification tests designed to demonstrate that the new devices (DxH 900, DxH 690T, SMS II) perform "as well as or better than" the predicate devices (DxH 800, SMS) across various analytical parameters.

    The information is structured below accordingly, with the understanding that the context is substantial equivalence testing, not a novel device demonstrating de novo clinical acceptance.


    Device Under Evaluation for Substantial Equivalence:

    • UniCel DxH 900 Coulter Cellular Analysis System
    • UniCel DxH Slidemaker Stainer II Coulter Cellular Analysis System
    • UniCel DxH 690T Coulter Cellular Analysis System

    Predicate Devices:

    • UniCel DxH 800 Coulter Cellular Analysis System (K193124)
    • UniCel DxH Slidemaker Stainer Coulter Cellular Analysis System (K162414)

    The "acceptance criteria" for the new devices are generally linked to demonstrating performance comparable to, or better than, the predicate devices, adhering to established analytical performance standards (e.g., CLSI guidelines). The "study" involves various analytical performance tests comparing the subject devices to the predicate.

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are generally implied by the predicate device's performance specifications and adherence to CLSI guidelines. The performance reported below demonstrates that the subject devices meet these implicit criteria (i.e., they perform comparably to the predicate). Because the parameter list is extensive and no explicit single acceptance limit is stated for each, summary tables are provided where possible, drawn from the comprehensive data in the text.

    a. Repeatability (Within-run Imprecision)

    • Acceptance Criteria (Implied): Percent Coefficient of Variation (%CV) or Standard Deviation (SD) within specified limits, typically reflecting clinically acceptable imprecision and comparable to predicate device performance.
    • Reported Device Performance (DxH 900 - selected parameters; all passed):

    | Parameter | Units | Level | N | Mean | %CV or SD | Conclusion |
    |---|---|---|---|---|---|---|
    | WBC | x10³ cells/µL | 5.000 - 10.000 | 10 | 5.80 | 0.69 %CV | Pass |
    | RBC | x10⁶ cells/µL | 4.500 - 5.500 | 10 | 4.72 | 0.50 %CV | Pass |
    | Hgb | g/dL | 14.00 - 16.00 | 10 | 15.03 | 0.36 %CV | Pass |
    | Platelet | x10³ cells/µL | 200.0 - 400.0 | 10 | 256.80 | 1.35 %CV | Pass |
    | Neut % | % | 50.0 - 60.0 | 10 | 58.24 | 0.99 %CV | Pass |
    | Retic % | % | 0.000 - 1.500 | 10 | 1.17 | 0.13 SD | Pass |
    | BF-RBC | cells/mm³ | 10,000 - 15,000 | 10 | 12,643 | 2.42 %CV | Pass |
    | BF-TNC | cells/mm³ | 50 - 2,000 | 10 | 594 | 1.28 %CV | Pass |

    (Similar comprehensive data provided for DxH 690T, all passed.)
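    The within-run %CV tabulated above is simply the sample standard deviation of the replicates relative to their mean. A minimal sketch (Python, with hypothetical WBC replicates, not data from the submission):

```python
from statistics import mean, stdev

def repeatability_cv(replicates):
    """Within-run %CV: sample SD of the replicate measurements divided by
    their mean, expressed as a percent (the statistic tabulated above)."""
    return stdev(replicates) / mean(replicates) * 100.0

# Hypothetical WBC replicates (x10^3 cells/uL), n = 10 as in the tables
wbc = [5.76, 5.82, 5.79, 5.85, 5.77, 5.83, 5.80, 5.78, 5.84, 5.76]
print(round(repeatability_cv(wbc), 2))  # → 0.57
```

    For an SD-based criterion (as used for Retic % above), the same calculation stops at the standard deviation instead of dividing by the mean.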

    b. Reproducibility (Across-site/day/instrument Imprecision)

    • Acceptance Criteria (Implied): Reproducibility (Total CV%) to be within clinically acceptable limits, demonstrating consistent performance across different instruments, days, and runs. The test instruments met the reproducibility specifications for all parameters.
    • Reported Device Performance (DxH 900-3S workcell - examples for Level 1, all passed):

    | Parameter | Units | N (total) | Reproducibility %CV | Conclusion |
    |---|---|---|---|---|
    | WBC | 10³ cells/µL | 90 | 2.30 | Pass |
    | RBC | 10⁶ cells/µL | 90 | 0.66 | Pass |
    | HGB | g/dL | 90 | 0.52 | Pass |
    | PLT | 10³ cells/µL | 90 | 1.64 | Pass |
    | BF TNC | cells/mm³ | 90 | 7.28 | Pass |
    | BF RBC | cells/mm³ | 90 | 3.97 | Pass |
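    A total (reproducibility) CV combines within-run and between-run variability. The actual study followed the full multi-site, multi-day EP05-A3 design; a simplified one-way variance-components sketch with hypothetical data:

```python
from statistics import mean

def reproducibility_cv(runs):
    """Total (reproducibility) %CV via a simplified one-way variance-components
    analysis: pooled within-run variance plus between-run variance, relative to
    the grand mean. `runs` is a list of lists, one inner list of replicate
    results per run/day/instrument (balanced design assumed)."""
    grand = mean(v for run in runs for v in run)
    n = len(runs[0])                      # replicates per run
    k = len(runs)                         # number of runs
    # pooled within-run variance
    var_within = mean(sum((v - mean(run)) ** 2 for v in run) / (n - 1)
                      for run in runs)
    # between-run variance of the run means, corrected for within-run noise
    run_means = [mean(run) for run in runs]
    var_means = sum((m - grand) ** 2 for m in run_means) / (k - 1)
    var_between = max(var_means - var_within / n, 0.0)
    total_sd = (var_within + var_between) ** 0.5
    return total_sd / grand * 100.0

# Hypothetical WBC QC results (x10^3 cells/uL), 3 runs of 3 replicates
print(round(reproducibility_cv([[5.7, 5.8, 5.8], [5.9, 5.8, 5.9], [5.8, 5.7, 5.8]]), 2))
```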

    c. Linearity

    • Acceptance Criteria: Deviation between measured and predicted values to be within specified acceptance limits for each parameter across the analytical measuring interval (AMI). All instances showed "Pass".
    • Reported Linearity Ranges (all passed on DxH 900-3S workcell):

    | Parameter | Units | Linearity Range Result |
    |---|---|---|
    | WBC | 10³ cells/µL | 0.064 - 408.5 |
    | RBC | 10⁶ cells/µL | 0.001 - 8.560 |
    | PLT | 10³ cells/µL | 3.2 - 3002 |
    | HGB | g/dL | 0.04 - 26.070 |
    | BF-RBC | cells/mm³ | 1113.10 - 6,353,906 |
    | BF-TNC | cells/mm³ | 31.50 - 92,745 |
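    A linearity check of this kind compares measured results at each level against the values predicted across the analytical measuring interval. This sketch uses deviation from an ordinary least-squares line as a simplified stand-in for the fuller EP06 polynomial comparison; the data are hypothetical:

```python
def linearity_deviation(expected, measured):
    """Deviation of each measured level from an ordinary least-squares line
    fitted through (expected, measured) pairs. Each deviation would be
    compared against the parameter's acceptance limit."""
    n = len(expected)
    sx, sy = sum(expected), sum(measured)
    sxx = sum(x * x for x in expected)
    sxy = sum(x * y for x, y in zip(expected, measured))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return [y - (slope * x + intercept) for x, y in zip(expected, measured)]

# Hypothetical WBC dilution series (x10^3 cells/uL)
devs = linearity_deviation([10, 100, 200, 300, 400],
                           [10.2, 99.5, 201.0, 299.0, 401.0])
```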

    d. Carryover

    • Acceptance Criteria: Carryover to be below specified limits for each parameter (percent carryover for WBC, RBC, Hgb, and PLT; residual event counts for the Diff, NRBC, and Retic channels).
    • Reported Device Performance (CnDR mode):

    | Instrument | WBC | RBC | Hgb | PLT | Diff (events) | NRBC (events) | Retic (events) |
    |---|---|---|---|---|---|---|---|
    | DxH 690T | 0.11% | 0.03% | 0.26% | 0.07% | 22, 20, 35 | 12, 5, 7 | 25, 20, 22 |
    | DxH 900 | 0.09% | 0.05% | 0.23% | 0.17% | 11, 15, 37 | 7, 1, 3 | 10, 6, 3 |
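    Percent carryover figures like these are conventionally computed from three consecutive high-concentration runs followed by three low/blank runs (a CLSI H26-style protocol). A sketch with hypothetical values:

```python
def carryover_percent(high_runs, low_runs):
    """Percent carryover from the three-high / three-low protocol:
    run a high-concentration sample three times (h1..h3), then a low/blank
    sample three times (l1..l3); carryover% = (l1 - l3) / (h3 - l3) * 100."""
    h1, h2, h3 = high_runs
    l1, l2, l3 = low_runs
    return (l1 - l3) / (h3 - l3) * 100.0

# Hypothetical WBC example (x10^3 cells/uL)
print(round(carryover_percent([95.0, 94.8, 95.2], [0.15, 0.06, 0.05]), 2))  # → 0.11
```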
    2. Sample Sizes Used for the Test Set and Data Provenance

    • Method Comparison (Whole Blood): 735 whole blood specimens from 3 clinical sites (adult and pediatric samples).
    • Method Comparison (Body Fluid): 195 body fluid specimens (BF TNC) and 130 body fluid specimens (BF RBC) from multiple sites.
    • Flagging Analysis: 735 whole blood samples (residual normal and abnormal) from three (3) clinical sites.
    
    • Data Provenance: Data were collected from multiple clinical sites (indicated for the method comparison and flagging analyses), and testing covered workcell configurations (DxH 900-3S workcell) as well as stand-alone instruments. The data appear to be prospective, as they involve active testing of the new devices. The countries of origin are not specified, but, as is typical of FDA submissions, the sites are presumably US-based or international sites compliant with US regulations.
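    A method comparison of this kind is typically summarized by regression statistics and bias between the paired results. A minimal sketch with hypothetical paired data; note that a full EP09c analysis would typically use Deming or Passing-Bablok regression rather than the ordinary least squares used here for brevity:

```python
from statistics import mean

def method_comparison(reference, candidate):
    """Summary statistics for a method comparison: ordinary least-squares
    slope and intercept of candidate vs. reference, plus mean bias
    (candidate minus reference)."""
    mx, my = mean(reference), mean(candidate)
    sxx = sum((x - mx) ** 2 for x in reference)
    sxy = sum((x - mx) * (y - my) for x, y in zip(reference, candidate))
    slope = sxy / sxx
    intercept = my - slope * mx
    bias = mean(c - r for c, r in zip(candidate, reference))
    return slope, intercept, bias

# Hypothetical paired WBC results: predicate (reference) vs. subject device
slope, intercept, bias = method_comparison(
    [2.1, 4.8, 7.3, 11.6, 25.4], [2.2, 4.7, 7.5, 11.5, 25.6])
```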

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not mention experts or their qualifications for establishing ground truth. This is a common characteristic of analytical performance studies for IVD devices such as hematology analyzers. The "ground truth" for quantitative measurements (e.g., WBC, RBC counts) is typically the measurement itself, obtained from a reference method or predicate device under rigorous calibration and quality control. For qualitative aspects such as flagging, the ground truth is established by comparing the flag output to the predicate device's flag output, the predicate's performance having already been validated. There is no indication of human "expert" review to establish per-case ground truth for these types of measurements.

    4. Adjudication Method for the Test Set

    Since ground truth is based on predefined analytical measurements against reference methods/predicate devices rather than human interpretation, an adjudication method like 2+1 or 3+1 (common in image-based AI studies) is not applicable and not mentioned in the document.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, an MRMC comparative effectiveness study was not done. Such studies are typically performed for AI-assisted diagnostic devices where the AI is intended to improve human reader performance (e.g., radiologists interpreting images). This submission is for an automated hematology analyzer: the device performs the analysis directly, with no human interpreter in the loop. The evaluation is focused on the device's analytical performance (accuracy, precision, linearity, etc.) compared to its predicate device.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, the core of this submission is a standalone performance evaluation of the DxH 900/690T systems. The "algorithm" here refers to the instrument's internal processing of raw signals to derive reported parameters (e.g., cell counts, differentials). Its performance is assessed independently of human intervention during the measurement process, and its output is compared to a reference standard (predicate device) and expected analytical capabilities.

    7. The Type of Ground Truth Used

    The ground truth for the analytical performance studies (repeatability, linearity, method comparison, etc.) is based on:

    • Reference Method / Predicate Device Comparison: The performance of the UniCel DxH 900/DxH 690T is directly compared against the established performance of the legally marketed predicate UniCel DxH 800 and previously cleared UniCel DxH Slidemaker Stainer. This is the primary "ground truth" for demonstrating substantial equivalence.
    • CLSI Guidelines: Adherence to established CLSI (Clinical and Laboratory Standards Institute) protocols (e.g., EP09c, H26-A2, EP05-A3, EP06-A, EP17-A2, EP28-A3c) implicitly defines the "truth" or expected range of acceptable analytical performance for these types of in vitro diagnostic devices.
    • Control Materials and Calibrators: Certified reference materials and quality control products (e.g., COULTER 6C Cell Control, COULTER S-CAL Calibrator) are used to establish and verify instrument accuracy and precision, serving as a daily "ground truth" check.
    • Fresh Patient Samples: Used in method comparison and other studies to ensure real-world performance reflects clinical conditions.

    There is no mention of "expert consensus," "pathology," or "outcomes data" being used as ground truth in the traditional sense for these analytical performance studies of a hematology analyzer.
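    As an illustration of how control materials act as a daily "ground truth" check, a laboratory might screen QC results with simple Westgard-style rules. This is a hypothetical sketch only; the submission does not describe the analyzer's actual QC logic:

```python
def westgard_flags(values, target, sd):
    """Hypothetical daily-QC screen: flag a control series that violates the
    1-3s rule (one result more than 3 SD from target) or the 2-2s rule (two
    consecutive results more than 2 SD out on the same side)."""
    z = [(v - target) / sd for v in values]
    flags = []
    for i, zi in enumerate(z):
        if abs(zi) > 3:
            flags.append((i, "1-3s"))
        if i > 0 and z[i - 1] * zi > 0 and abs(zi) > 2 and abs(z[i - 1]) > 2:
            flags.append((i, "2-2s"))
    return flags

# Hypothetical WBC control results against an assayed target of 5.8 +/- 0.1
print(westgard_flags([5.82, 5.79, 6.15], 5.8, 0.1))  # → [(2, '1-3s')]
```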

    8. The Sample Size for the Training Set

    The document does not provide information on the training set size. Hematology analyzers, while highly sophisticated, are typically rule-based systems or calibrated analytical instruments, rather than machine learning/AI models that require explicit "training sets" in the modern sense (e.g., thousands of labeled images for deep learning). Their "training" involves instrument calibration using standardized calibrators. If there are adaptive algorithms or older machine learning components, their "training" data is internal to the manufacturer's development process and not typically disclosed in a 510(k) summary focused on analytical validation.

    9. How the Ground Truth for the Training Set Was Established

    As noted above, the concept of a "training set" and its "ground truth" in the context of a modern AI/ML device is not directly applicable to this traditional analytical instrument unless it has specific, undisclosed internal adaptive algorithms. The "ground truth" for the instrument's operational accuracy and precision is primarily established through its calibration process using certified calibrator materials and verified against quality control materials and comparisons to reference methods or predicate devices as part of its manufacturing and analytical validation.
