
510(k) Data Aggregation

    K Number
    K251371
    Date Cleared
    2025-06-25 (54 days)
    Product Code
    Regulation Number
    864.5220
    Reference Devices
    K112605, K071967
    Intended Use

    The XR-Series module (XR-20) is a quantitative multi-parameter automated hematology analyzer intended for in vitro diagnostic use in screening patient populations found in clinical laboratories.

    The XR-Series module classifies and enumerates the following parameters in whole blood: WBC, RBC, HGB, HCT, MCV, MCH, MCHC, PLT (PLT-I, PLT-F), NEUT%/#, LYMPH%/#, MONO%/#, EO%/#, BASO%/#, IG%/#, RDW-CV, RDW-SD, MPV, NRBC%/#, RET%/#, IPF, IPF#, IRF, RET-He and has a Body Fluid mode for body fluids. The Body Fluid mode enumerates the WBC-BF, RBC-BF, MN%/#, PMN%/#, and TC-BF# parameters in cerebrospinal fluid (CSF), serous fluids (peritoneal, pleural) and synovial fluids. Whole blood should be collected in K2EDTA or K3EDTA anticoagulant, and serous and synovial fluids in K2EDTA anticoagulant to prevent clotting of fluid. The use of anticoagulants with CSF specimens is neither required nor recommended.

    Device Description

    The Sysmex XR-Series module (XR-20) is a quantitative multi-parameter hematology analyzer intended to perform tests on whole blood samples collected in K2EDTA or K3EDTA and body fluids (pleural, peritoneal and synovial) collected in K2EDTA anticoagulant. The analyzers can also perform tests on CSF, which should not be collected in any anticoagulant. The XR-Series analyzer consists of four principal units: (1) the Main Unit (XR-20), which aspirates, dilutes, mixes, and analyzes blood and body fluid samples; (2) the Auto Sampler Unit, which supplies samples to the Main Unit automatically; (3) the IPU (Information Processing Unit), which processes data from the Main Unit and provides the operator interface to the system; and (4) the Pneumatic Unit, which supplies pressure and vacuum to the Main Unit.

    The XR-20 analyzer has an additional white progenitor cell (WPC) measuring channel and associated WPC reagents. The new WPC channel provides two separate flags for blasts and abnormal lymphocytes.

    AI/ML Overview

    The FDA 510(k) clearance letter details the performance testing conducted for the Sysmex XR-Series (XR-20) Automated Hematology Analyzer to demonstrate its substantial equivalence to the predicate device, the Sysmex XN-20. Because the letter summarizes the performance studies rather than reproducing the full study reports, some requested details (e.g., exact sample provenance beyond "US clinical sites," the specific qualifications of all experts, and the comprehensive list of acceptance criteria for every parameter in each study) are not exhaustively provided.

    However, based on the information available, here's a breakdown of the acceptance criteria and the studies proving the device meets them:

    Overall Acceptance Criteria & Study Design Philosophy:

    The overarching acceptance criterion for this 510(k) submission is to demonstrate substantial equivalence to the predicate device, Sysmex XN-20 (K112605). This is primarily proven through:

    • Analytical Performance Studies: Demonstrating that the XR-20 analyzer's performance (precision, linearity, analytical specificity, stability, limits of detection, carry-over) is "acceptable" or "met manufacturer's specifications/predefined acceptance criteria requirements."
    • Method Comparison Studies: Showing a strong correlation and acceptable bias between the XR-20 and the predicate XN-20 for all claimed parameters across various patient populations and challenging samples.
    • Clinical Sensitivity and Specificity Studies: For flagging capabilities, demonstrating acceptable agreement (sensitivity/specificity, PPA/NPA/OPA) with a reference method (manual microscopy) and the predicate device.
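    The agreement metrics cited for the flagging studies (sensitivity/specificity, PPA/NPA/OPA) all derive from a 2×2 table of device flags versus the microscopy reference. A minimal sketch (function name and counts below are illustrative, not from the submission):

```python
def agreement_metrics(tp, fp, fn, tn):
    """Percent-agreement statistics from a 2x2 contingency table.

    Rows: device flag result; columns: reference-method (manual
    microscopy) result. When the reference method is treated as ground
    truth, PPA equals sensitivity and NPA equals specificity.
    """
    ppa = 100.0 * tp / (tp + fn)                    # positive percent agreement
    npa = 100.0 * tn / (tn + fp)                    # negative percent agreement
    opa = 100.0 * (tp + tn) / (tp + fp + fn + tn)   # overall percent agreement
    return ppa, npa, opa

# Hypothetical counts: 90 correctly flagged, 5 missed, 10 false flags, 395 clean
ppa, npa, opa = agreement_metrics(tp=90, fp=10, fn=5, tn=395)
print(f"PPA={ppa:.1f}% NPA={npa:.1f}% OPA={opa:.1f}%")  # → PPA=94.7% NPA=97.5% OPA=97.0%
```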

    1. Table of Acceptance Criteria and Reported Device Performance

    The document doesn't provide a single, consolidated table of all acceptance criteria for every parameter across all tests. Instead, it states that results "met manufacturer's specifications or predefined acceptance criteria requirements" for analytical performance tests, and provides specific correlation coefficients, slopes, intercepts, and mean differences/percent differences for method comparison studies.

    Here's a partial table based on the quantifiable data presented for Method Comparison Studies (Whole Blood - Combined Sites), which is a key performance indicator for substantial equivalence. The "Acceptance Criteria" are implied by what is generally considered acceptable in hematology analyzer comparisons for FDA submissions (high correlation, small bias), and explicitly stated for certain parameters like HGB.

    Implicit Acceptance Criteria (General expectation for Method Comparison based on FDA context):

    • Correlation Coefficient (r): Typically > 0.95 (ideally > 0.98 or 0.99 for robust parameters)
    • Slope: Close to 1.0 (ideally between 0.95 and 1.05)
    • Intercept: Close to 0
    • Mean Difference / % Mean Difference / Estimated Bias: Within clinically acceptable limits (often derived from biological variation or regulatory guidelines). The document explicitly mentions a bias limit for HGB: ±2% or 0.2 g/dL.
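    The correlation, slope, intercept, and bias figures reported below can be reproduced from paired measurements. The sketch uses ordinary least squares for brevity; FDA method-comparison studies commonly use Deming or Passing-Bablok regression, and the submission does not state which was applied. Function names are hypothetical; only the HGB limit of ±2% or 0.2 g/dL comes from the document:

```python
import statistics

def method_comparison(ref, test):
    """Least-squares slope/intercept, Pearson r, and mean (%) difference
    between a candidate analyzer and a comparator (sketch only)."""
    mx, my = statistics.fmean(ref), statistics.fmean(test)
    sxx = sum((x - mx) ** 2 for x in ref)
    sxy = sum((x - mx) * (y - my) for x, y in zip(ref, test))
    syy = sum((y - my) ** 2 for y in test)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    mean_diff = statistics.fmean(y - x for x, y in zip(ref, test))
    pct_diff = 100.0 * mean_diff / mx
    return slope, intercept, r, mean_diff, pct_diff

def hgb_bias_ok(mean_diff_gdl, pct_diff):
    """Explicit HGB acceptance limit from the summary: bias within
    ±2% or ±0.2 g/dL (either condition suffices)."""
    return abs(pct_diff) <= 2.0 or abs(mean_diff_gdl) <= 0.2

# The one-site HGB bias noted in Table 1 (-2.10% / -0.3 g/dL) exceeds both limits:
print(hgb_bias_ok(-0.3, -2.10))  # → False
```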

    Table 1: Partial Acceptance Criteria and Reported Device Performance (Method Comparison - Whole Blood)

    | Measurand | Acceptance Criteria for r (Implied/Explicit) | Reported r | Acceptance Criteria for Slope (Implied) | Reported Slope (95% CI) | Acceptance Criteria for Mean Diff. / % Diff. (Implied/Explicit) | Reported Mean Diff. / % Diff. | Key Conclusion |
    |---|---|---|---|---|---|---|---|
    | WBC | > 0.99 (high) | 0.9999 | ~1.0 | 1.003 (0.998, 1.007) | Close to 0 | 0.17 / 0.96% | Met |
    | RBC | > 0.99 (high) | 0.9944 | ~1.0 | 1.000 (0.993, 1.006) | Close to 0 | -0.01 / -0.34% | Met |
    | HGB | > 0.99 (high) | 0.9954 | ~1.0 | 0.993 (0.989, 0.998) | ±2% or 0.2 g/dL (explicit) | -0.1 / -0.79% (Note: one site had -2.10% / -0.3 g/dL bias, stated as acceptable due to high r) | Met (with explanation for one site's bias) |
    | HCT | > 0.99 (high) | 0.9946 | ~1.0 | 0.998 (0.993, 1.003) | Close to 0 | -0.2 / -0.40% | Met |
    | PLT-I | > 0.99 (high) | 0.9990 | ~1.0 | 1.005 (0.991, 1.020) | Close to 0 | -2 / -0.72% | Met |
    | PLT-F | > 0.99 (high) | 0.9990 | ~1.0 | 1.034 (1.019, 1.048) | Close to 0 | 6 / 1.83% | Met |
    | NRBC | > 0.99 (high) | 0.9996 | ~1.0 | 1.006 (0.996, 1.016) | Close to 0 | 0.00 / 0.61% | Met |
    | RET (%) | > 0.99 (high) | 0.9931 | ~1.0 | 1.033 (1.009, 1.057) | Close to 0 | 0.06 / 2.68% | Met |
    | IRF (%) | ~0.98 (high) | 0.9820 | ~1.0 | 0.998 (0.983, 1.012) | Close to 0 | -0.9 / -5.32% | Met |
    | IPF (%) | > 0.99 (high) | 0.9902 | ~1.0 | 0.999 (0.976, 1.023) | Close to 0 | -0.0 / -0.94% | Met |
    | RET-He (pg) | ~0.96 (high) | 0.9616 | ~0.93 (lower, but CI is tight) | 0.930 (0.906, 0.954) | Close to 0 | -1.2 / -3.85% | Met |

    For other analytical performance studies (Precision, Linearity, Detection Limit, Carry-Over, Specificity, Stability), the document consistently states that the XR-20 "met manufacturer's specifications or predefined acceptance criteria requirements," supporting that specific numerical acceptance criteria were defined and achieved.


    2. Sample Size and Data Provenance

    • Test Set:

      • Whole Blood Method Comparison: A total of 865 unique residual whole blood samples.
      • Body Fluid Method Comparison: A total of 397 residual body fluid samples.
      • Provenance: All studies were conducted at three US clinical sites (for major studies like method comparison and reproducibility) or one internal site (for some linearity, stability, and matrix studies).
      • Retrospective/Prospective: Samples are described as "residual" (implying retrospective, de-identified leftover samples) or "prospectively collected" where specified (e.g., for some stability studies).
    • Training Set: The document does not specify a training set for the algorithm, as this is a traditional in-vitro diagnostic (IVD) device (Automated Hematology Analyzer) which likely relies on fixed algorithms and established measurement principles (RF/DC Detection, Sheath Flow DC Detection, Flow Cytometry) rather than a machine learning model that requires explicit training data for its core functionality. The performance characterization is about its analytical capabilities, not about learning from a dataset to perform a task.


    3. Number of Experts and Qualifications (for Ground Truth)

    • Clinical Sensitivity and Specificity (Flagging Capabilities): The ground truth for flagging capabilities was established by "manual differential counts and peripheral blood smear review by experienced examiners using light microscopy (reference method) at each of the three external clinical sites." The exact number and specific qualifications (e.g., "radiologist with 10 years of experience") are not provided, but the term "experienced examiners" implies qualified personnel (e.g., clinical laboratory scientists, pathologists). Given this is a hematology analyzer, these would typically be clinical laboratory specialists or hematopathologists.

    4. Adjudication Method (for the Test Set)

    • Clinical Sensitivity and Specificity (Flagging): The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1) for resolving discrepancies between multiple manual reviews. It states "peripheral blood smear review by experienced examiners," primarily using manual microscopy as the reference method. This implies there might be a single expert review or an internal consensus process, but no detail on conflict resolution is provided.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No MRMC study was described. This type of study (MRMC) is typically performed for AI-assisted diagnostic tools where human reader performance is a direct outcome of interest and needs to be compared with and without AI assistance. The Sysmex XR-20 is an automated analyzer, a standalone device that performs measurements and classifications. While it outputs flags that may assist human review, its primary function isn't human-in-the-loop assistance in interpretation (like an AI for radiology image reading). Therefore, an MRMC study is not applicable for this device.

    6. Standalone (Algorithm Only) Performance

    • Yes, the primary performance studies are standalone algorithm/device performance. All analytical performance studies (Precision, Linearity, Detection Limit, Carry-Over, Analytical Specificity, Sample Stability) and the method comparison studies (comparing XR-20 results directly against the predicate XN-20) represent the standalone performance of the XR-20 analyzer. The device functions automatically without human input during the analysis process; therefore, human-in-the-loop performance is not directly evaluated as a primary outcome for its measurement capabilities.

    7. Type of Ground Truth Used

    • Analytical Ground Truth: For most analytical performance studies (Precision, Linearity, Detection Limits, Carry-Over), the "ground truth" is established by the performance of the predicate device (Sysmex XN-20) or by a well-controlled experimental setup (e.g., known dilutions for linearity, blank samples for LoB).
    • Clinical Ground Truth (for flagging): For clinical sensitivity and specificity of flagging capabilities, the ground truth was expert consensus / manual review against peripheral blood smear using light microscopy. The document refers to this as the "reference method."

    8. Sample Size for the Training Set

    • As mentioned in point 2, no explicit training set for a machine learning algorithm is described. This device's core functionality relies on established physical and chemical principles and traditional signal processing for cell counting and classification, not a learnable AI model from a training data set in the typical sense.

    9. How the Ground Truth for the Training Set Was Established

    • Since there's no described "training set" for an AI/ML algorithm, this point is not applicable in the context of this traditional IVD device. The "ground truth" for verifying its performance (as detailed above) was established through comparisons to a legally marketed predicate device (XN-20) and a gold standard manual method (microscopy).

    K Number
    K243283
    Date Cleared
    2025-02-20 (126 days)
    Product Code
    Regulation Number
    864.5220
    Reference Devices
    K112605
    Intended Use

    The Alinity h-series System is an integrated hematology analyzer (Alinity hq) and slide maker stainer (Alinity hs) intended for screening patient populations found in clinical laboratories by qualified health care professionals. The Alinity h-series can be configured as:

    • One stand-alone automated hematology analyzer system.

    • A multimodule system that includes at least one Alinity hq analyzer module and may include one Alinity hs slide maker stainer module.

    The Alinity hq analyzer module provides complete blood count and a 6-part white blood cell differential for normal and abnormal cells in capillary and venous whole blood collected in K2EDTA. The Alinity hq analyzer provides quantitative results for the following measurands: WBC, NEU, %N, LYM, %L, MONO, %M, EOS, %E, BASO, %B, IG, %IG, RBC, HCT, HGB, MCV, MCH, MCHC, MCHr, RDW, NRBC, NR/W, RETIC, %R, IRF, PLT, MPV, %rP. The Alinity hq analyzer module is indicated to identify patients with hematologic parameters within and outside of established reference ranges. The Alinity hs slide maker stainer module automates whole blood film preparation and staining and stains externally prepared whole blood smears.

    For in vitro diagnostic use.

    Device Description

    The Alinity h-series System is a multimodule system that consists of different combinations of one or more of the following modules: a quantitative multi-parameter automated hematology analyzer (Alinity hq) and an automated slide maker stainer (Alinity hs).

    The Alinity hq is a quantitative, multi-parameter, automated hematology analyzer designed for in vitro diagnostic use in counting and characterizing blood cells using a multi-angle polarized scattered separation (MAPSS) method to detect and count red blood cells (RBC), nucleated red blood cells (NRBC), platelets (PLT), and white blood cells (WBC), and to perform WBC differentials (DIFF) in whole blood.

    There is also an option to measure reticulocytes (RETIC) in the same run. The available test selections are:

    • CBC+DIFF: Complete blood count with differential
    • CBC+DIFF+RETIC: Complete blood count with differential and reticulocytes

    The Alinity h-series of instruments has a scalable design to provide full integration of multiple automated hematology analyzers that can include the integration of an automated blood film preparation and staining module, all of which are controlled by one user interface. The modules are designed to fit together. Each module has an internal conveyor that enables racks of specimen tubes to be transported between modules. The system can move racks between modules to perform different tests on a given specimen (e.g., make slide smears on the Alinity hs).

    An Alinity h-series system can be configured as follows:

    • Configuration 1: 1 (Alinity hq) + 0 (Alinity hs) = 1+0
    • Configuration 2: 1 (Alinity hq) + 1 (Alinity hs) = 1+1
    • Configuration 3: 2 (Alinity hq) + 0 (Alinity hs) = 2+0
    • Configuration 4: 2 (Alinity hq) + 1 (Alinity hs) = 2+1
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the Alinity h-series System, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document doesn't explicitly state all acceptance criteria in a dedicated table format with clear "criteria" vs. "performance" columns for every test. However, it does mention that all results "met the predefined acceptance criteria" for various tests. The tables provided show the device's performance, and the accompanying text confirms it met criteria.

    Here's a consolidated view of relevant performance data presented, with implicit acceptance criteria being that the results are within acceptable ranges for clinical diagnostic instruments, or that they demonstrate improvement as intended by the software modification.

    | Test Category | Measurand | Reported Device Performance (Subject Device SW 5.8) | Acceptance Criteria (Implicit, based on "met predefined acceptance criteria" statements) |
    |---|---|---|---|
    | Precision (Normal Samples) | BASO (×10³/µL) | CBC+Diff: 0.021 SD (Range 0.01 to 0.12); CBC+Diff+Retic: 0.025 SD (Range 0.01 to 0.11) | SD/%CV point estimates to be within predefined limits. (Explicitly stated: "All samples were evaluated against all applicable acceptance criteria and met all acceptance criteria.") |
    | | %BASO (%) | CBC+Diff: 0.352 SD, 41.04 %CV (Range 0.13 to 2.20); CBC+Diff+Retic: 0.455 SD, 41.08 %CV (Range 0.13 to 2.00) | |
    | | LYM (×10³/µL) | CBC+Diff: 0.068 SD (Range 1.10 to 2.01), 3.09 %CV (Range 1.94 to 3.05); CBC+Diff+Retic: 0.063 SD (Range 1.10 to 2.01), 3.17 %CV (Range 1.91 to 3.07) | |
    | | %LYM (%) | CBC+Diff: 1.239 SD, 3.34 %CV (Range 13.80 to 57.80); CBC+Diff+Retic: 1.193 SD, 3.63 %CV (Range 13.40 to 58.10) | |
    | | WBC (×10³/µL) | CBC+Diff: 0.068 SD (Range 3.72 to 4.06), 2.71 %CV (Range 3.92 to 10.60); CBC+Diff+Retic: 0.085 SD (Range 3.72 to 4.04), 2.22 %CV (Range 3.93 to 10.40) | |
    | Precision (Pathological Samples) | WBC (×10³/µL) | Low: 0.083 SD (Range 0.06 to 2.01); High: 1.88 %CV (Range 41.40 to 209.00) | SD or %CV point estimates to be within predefined limits. (Explicitly stated: "All results met the predefined acceptance criteria, demonstrating acceptable short-term precision...") |
    | | BASO (×10³/µL) | Low WBC Related: 0.010 SD (Range 0.00 to 0.04) | |
    | | LYM (×10³/µL) | Low WBC Related: 0.040 SD (Range 0.12 to 0.74) | |
    | Linearity | WBC | Overall Linearity Range: (0.00 to 448.58) ×10³/µL | All results met the predefined acceptance criteria and were determined to be acceptable. |
    | Method Comparison (vs. Sysmex XN-10) | BASO (×10³/µL) | r: 0.26 (0.22, 0.30); Slope: 1.25 (1.20, 1.30); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.00 - 2.41, N=1812) | Bias at medical decision points evaluated and within predefined acceptance criteria. "All results were within the predefined acceptance criteria and found to be acceptable..." |
    | | %BASO (%) | r: 0.44 (0.40, 0.48); Slope: 1.44 (1.39, 1.50); Intercept: -0.12 (-0.14, -0.09). (Sample Range 0.00 - 8.37, N=1812) | |
    | | LYM (×10³/µL) | r: 0.99 (0.99, 0.99); Slope: 0.99 (0.99, 1.00); Intercept: 0.02 (0.01, 0.02). (Sample Range 0.05 - 8.34, N=1598) | |
    | | %LYM (%) | r: 1.00 (1.00, 1.00); Slope: 1.00 (0.99, 1.00); Intercept: 0.04 (0.04, 0.15). (Sample Range 0.34 - 84.60, N=1598) | |
    | | WBC (×10³/µL) | r: 1.00 (1.00, 1.00); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.07 - 436.00, N=1958) | |
    | Method Comparison (vs. Predicate Device SW 5.0) | BASO (×10³/µL) | r: 0.75 (0.73, 0.77); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.00 - 2.41, N=1801) | Bias at medical decision points evaluated and within predefined acceptance criteria. "All results were within the predefined acceptance criteria and found to be acceptable..." |
    | | %BASO (%) | r: 0.92 (0.91, 0.92); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.00 - 8.37, N=1801) | |
    | | LYM (×10³/µL) | r: 1.00 (1.00, 1.00); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.05 - 8.34, N=1589) | |
    | | %LYM (%) | r: 1.00 (1.00, 1.00); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.34 - 84.60, N=1589) | |
    | | WBC (×10³/µL) | r: 1.00 (1.00, 1.00); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.07 - 436.00, N=1948) | |
    | Clinical Sensitivity/Specificity | Any Morphological Flags | Sensitivity: 67.57% (58.03%, 76.15%); Specificity: 77.55% (73.79%, 81.01%); Efficiency: 75.85% (72.37%, 79.09%). (N=650) | Met predefined "acceptance criteria" (numerical targets not explicitly given, but stated as met). |
    | | Any Distributional Abnormalities | Sensitivity: 83.02% (77.95%, 87.34%); Specificity: 80.59% (76.20%, 84.49%); Efficiency: 81.60% (78.37%, 84.54%). (N=636) | |
    | | Any Morphological and/or Distributional Abnormalities | Sensitivity: 80.98% (76.12%, 85.23%); Specificity: 76.09% (71.22%, 80.51%); Efficiency: 78.40% (75.02%, 81.51%). (N=648) | |
    | Reference Range Verification | All measurands | Upper bound of the 95% CI for the percentage of replayed results within predicate reference ranges was ≥ 95%. | Upper bound of the two-sided 95% CI for the percentage of replayed results that were within the reference ranges of the predicate device was ≥ 95%. (Explicitly stated as met.) |
    | Specific Improvement for Affected BASO Samples | BASO (×10³/µL) | Subject Device: r: 0.84 (0.75, 0.90); Slope: 1.17 (1.00, 1.32); Intercept: 0.00 (-0.01, 0.01). (Range 0.00 - 1.69, N=67). Predicate Device: r: 0.93 (0.90, 0.96); Slope: 2.22 (1.64, 2.80); Intercept: -0.01 (-0.05, 0.02). (Range 0.03 - 8.11, N=67). Demonstrates reduction of falsely increased basophils. | Results for potentially impacted measurands (BASO and %BASO) must demonstrate reduction of falsely increased basophil measurements. "Additionally, the results demonstrate a reduction in the number of false positive sample %BASO classifications..." |
    | | %BASO (%) | Subject Device: r: 0.61 (0.44, 0.75); Slope: 1.22 (0.98, 1.52); Intercept: -0.08 (-0.39, 0.19). (Range 0.00 - 4.33, N=67). Predicate Device: r: 0.33 (0.10, 0.53); Slope: 0.54 (0.31, 0.83); Intercept: 1.83 (1.45, 2.05). (Range 2.00 - 4.49, N=67). Demonstrates reduction of falsely increased basophils. | |
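    The percentage agreements above are reported with two-sided 95% confidence intervals. The summary does not state which interval method was used; the Wilson score interval below is one common choice and is offered only as an illustrative sketch:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Two-sided 95% Wilson score interval for a binomial proportion.
    One common way CIs like those reported for sensitivity/specificity
    are derived; the 510(k) summary does not specify its method."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Hypothetical example: 440 of 650 specimens correctly flagged
lo, hi = wilson_ci(440, 650)
```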

    2. Sample Size Used for the Test Set and Data Provenance

    The "test set" for this submission largely refers to re-analyzing raw data from the predicate device's (K220031) submission using the new algorithm.

    • For Precision Studies (Normal Samples):
      • Sample Size: 20 unique healthy donors for CBC+Diff, 19 for CBC+Diff+Retic.
      • Data Provenance: Retrospective (raw data files from K220031 submission were replayed). The origin of the donors (country) is not specified, but they are described as "healthy."
    • For Precision Studies (Pathological Samples and Medical Decision Levels):
      • Sample Size: Minimum of 16 donors per measurand and range, with a minimum of 4 repeatability samples per measurand and range (2 for CBC+Diff, 2 for CBC+Diff+Retic).
      • Data Provenance: Retrospective (raw data files from K220031 submission were replayed). The origin of the donors (country) is not specified, but they include "abnormal whole blood samples."
    • For Linearity:
      • Sample Size: RBC, HGB, NRBC used whole blood samples; WBC, PLT, RETIC used commercially available linearity kits. A minimum of 9 levels were prepared for each measurand.
      • Data Provenance: Retrospective (raw data files from K220031 submission were replayed). Whole blood samples and commercial kits.
    • For Method Comparison Study (Subject Device vs. Predicate Device K220031 and Sysmex XN-10):
      • Sample Size: 2,194 unique venous and/or capillary specimens. 1,528 specimens from subjects with medical conditions, 244 without.
      • Data Provenance: Retrospective (raw data files from K220031 submission were replayed). The origin of the donors (country) is not specified, but collected across 7 clinical sites, representing a "wide variety of disease states (clinical conditions)" and "wide range of demographics (age and sex)."
      • Specific "affected samples" for basophil analysis: 67 samples.
    • For Specimen Stability Studies:
      • K2EDTA Venous & Capillary Whole Blood: 14 unique native venous, 30 unique native capillary.
      • Controlled Room Temp K2EDTA Venous & Capillary Whole Blood: 10 K2EDTA venous from healthy donors, 10 abnormal de-identified leftover K2EDTA venous, 20 normal K2EDTA capillary.
      • K3EDTA Venous Whole Blood: 14 unique native venous.
      • K3EDTA Capillary Whole Blood: 94 unique native capillary.
      • Data Provenance: Retrospective (raw data files from K220031 submission were replayed). Samples from "apparently healthy donors" and "abnormal de-identified leftover" samples.
    • For Detection Limit:
      • Sample Size: 2 unique samples per day over a minimum of 3 days.
      • Data Provenance: Retrospective (raw data files from K220031 submission were replayed).
    • For Clinical Sensitivity/Specificity:
      • Sample Size: A subset of 674 venous and capillary whole blood specimens from the method comparison study.
      • Data Provenance: Retrospective (raw data files from K220031 submission were replayed). Collected from 6 clinical sites.
    • For Reference Range Verification:
      • Sample Size: Not explicitly stated but implied to be from the K220031 submission's reference range studies using "apparently healthy subjects" for adult and pediatric subgroups.
      • Data Provenance: Retrospective (raw data files from K220031 submission were replayed).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Clinical Sensitivity/Specificity Study:
      • Number of Experts: Two independent 200-cell microscopic reviews were performed. So, at least two experts per sample.
      • Qualifications: Not explicitly stated beyond "microscopic reviews of a blood smear (reference method)." It can be inferred these would be qualified laboratory professionals (e.g., medical technologists, clinical laboratory scientists) with expertise in manual differential counting, but specific years of experience or board certifications are not provided.
      • Ground Truth: The "final WBC differential and WBC, RBC, and PLT morphology results were based on the 400-cell WBC differential counts derived from the average of 2 concurring 200-cell differential counts and concordant RBC and PLT morphology results..."

    4. Adjudication Method for the Test Set

    • Clinical Sensitivity/Specificity Study: The ground truth was based on "the average of 2 concurring 200-cell differential counts and concordant RBC and PLT morphology results." This indicates an agreement-based adjudication method, likely a 2-of-2 consensus. If the two initial reviews did not concur, a third review/adjudication might have been employed (e.g., 2+1), but this is not explicitly stated. The phrasing "concurring 200-cell differential counts" strongly suggests initial agreement was required.
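    A 2-of-2 concurrence rule like the one inferred above can be sketched as follows. The function and the tolerance value are hypothetical, since the summary does not state how concurrence between the two 200-cell counts was judged:

```python
def combine_differentials(count_a, count_b, tolerance_pct=3.0):
    """Sketch of a 2-of-2 concurrence rule (hypothetical tolerance):
    two independent 200-cell differentials are averaged into a 400-cell
    result only if every cell class agrees within `tolerance_pct`
    percentage points; otherwise the sample needs adjudication (the
    actual conflict-resolution process is not described in the summary)."""
    merged = {}
    for cls in count_a:
        pct_a = 100.0 * count_a[cls] / 200
        pct_b = 100.0 * count_b[cls] / 200
        if abs(pct_a - pct_b) > tolerance_pct:
            return None  # no concurrence -> further review required
        merged[cls] = (pct_a + pct_b) / 2  # averaged 400-cell percentage
    return merged

counts_a = {"NEUT": 120, "LYMPH": 60, "MONO": 20}  # first 200-cell count
counts_b = {"NEUT": 118, "LYMPH": 62, "MONO": 20}  # second 200-cell count
print(combine_differentials(counts_a, counts_b))
# → {'NEUT': 59.5, 'LYMPH': 30.5, 'MONO': 10.0}
```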

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    • No, an MRMC comparative effectiveness study was not done in the context of human readers improving with AI assistance.
    • This submission describes an automated differential cell counter (Alinity h-series System) which is a standalone device for providing results without human assistance in the interpretation of the primary measurements (though human review of flags/smears may occur downstream).
    • The study primarily focuses on comparing the output of the device with its new algorithm (SW 5.8) to its previous version (SW 5.0) and to a predicate device (Sysmex XN-10). The clinical sensitivity/specificity study compares the device's algorithmic flags/differentials to microscopic analysis (human experts), but this is not an assistance study but rather a standalone performance evaluation against a gold standard.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Yes, a standalone performance evaluation was primarily done. The core of this submission is about a software algorithm modification within an automated analyzer.
    • All analytical performance studies (precision, linearity, detection limits, stability) and method comparison studies (against the predicate and Sysmex XN-10) evaluate the Alinity hq (with the new algorithm) as a standalone instrument.
    • The clinical sensitivity and specificity study also evaluates the Alinity hq's ability to identify abnormalities and morphological flags independently, comparing its output directly to expert microscopic review (ground truth). There's no mention of a human-in-the-loop scenario where humans are presented with AI results to improve their performance.

    7. The Type of Ground Truth Used

    • Expert Consensus (Microscopy): For the clinical sensitivity/specificity study, the ground truth for WBC differentials and morphological flags was established by manual microscopic review (400-cell differential) by two independent experts, with results based on their concurrence. This is a form of expert consensus.
    • Reference Methods/Device: For analytical performance and method comparison studies, the ground truth was established by:
      • The Alinity hq with its previous software version (K220031) (for direct comparison to the subject device's new software).
      • Another legally marketed predicate device (Sysmex XN-Series (XN-10) Automated Hematology Analyzer K112605).
      • Known concentrations/values in control materials or linearity kits.
      • Clinically accepted laboratory practices and norms (e.g., for precision, stability).

    8. The Sample Size for the Training Set

    • The document does not provide information on the sample size used for the training set for the algorithm modification. Since this submission describes a modification to an existing algorithm ("modified logic for counting basophils"), it's possible the training was done prior to the original K220031 submission, or that new data was used for a specific refinement without explicit detail in this summary. The focus of this 510(k) is the evaluation of the modified algorithm on existing data and its performance compared to predicates, not the development process of the algorithm itself.

    9. How the Ground Truth for the Training Set Was Established

    • As the training set details are not provided, the method for establishing its ground truth is also not elaborated upon in this 510(k) summary. Given the nature of a hematology analyzer, it would typically involve meticulously characterized blood samples, often with manual microscopic differentials, flow cytometry reference methods, or other gold standards, aligned with clinical laboratory guidelines.

    K Number
    K240636
    Date Cleared
    2024-05-02 (57 days)
    Product Code
    Regulation Number
    864.5220
    Reference Devices
    K112605
    Intended Use

    The HemoScreen is a point-of-care (POC) automated hematology analyzer intended for the enumeration and classification of the following parameters in capillary and venous whole blood (K2EDTA anticoagulated): WBC, RBC, HCT, MCV, MCH, MCHC, RDW, PLT, MPV, NEUT%, NEUT#, LYMP%, LYMP#, MONO%, MONO#, EO%, EO#, BASO%, and BASO#. The HemoScreen is for in vitro diagnostic use in clinical laboratories and/or POC settings for adults and children at least 2 years of age.

    Device Description

    HemoScreen is a point of care (POC), automated hematology analyzer that provides 20 common CBC parameters, including a 5-part leukocyte (WBC) differential, in capillary and venous whole blood samples. The HemoScreen analyzer (reader) is a tabletop device that is designed to use with a disposable reagent Cartridge. In addition to the Cartridge, the system includes a disposable Sampler with two glass capillaries which is used to collect the blood sample and then transfer it to the Cartridge.

    Once the Cartridge is inserted into the reader, there are no further procedural steps; blood is expelled from the capillaries (Sampler) into the reagent compartments (Cartridge). The reader then mixes the blood sample with the reagents by alternately pressing compressible portions of the Cartridge, eventually causing the suspension of cells to flow into the microfluidic chamber. Cells flowing in the microfluidic chamber focus into a single-cell plane due to a patented physical phenomenon known as viscoelastic focusing.

    The reader then captures images of the focused cells and analyzes them in real time using machine vision algorithms. When analysis is complete, the results are displayed to the user on the reader's touch screen and may be printed to an adjacent printer or exported to a USB flash drive. The Cartridge is ejected by the analyzer after analysis, and can then be safely disposed of, as the reagents and blood sample remain within the Cartridge.

    The basic staining and microscopic image analysis performed by HemoScreen closely resembles the traditional blood smear and the hemocytometer counting chamber. Leukocytes are classified based on their staining properties and morphology, whereas absolute counts are obtained by counting the cells contained in a chamber of predetermined volume. Test results are obtained within less than six (6) minutes and the results are saved.

    Quality Control: Commercial 3-level liquid quality controls, PIX-CBC Hematology Controls, are recommended for use with the HemoScreen. These controls cover all the tested parameters and are sampled the same way whole blood is sampled.

    Software: The HemoScreen software displays an intuitive, simple-to-use user interface that is operated via the touch screen. The software is responsible for operating the device, performing the measurements, and recording the results.

    AI/ML Overview

    The PixCell Medical Technologies HemoScreen Hematology Analyzer (K240636) demonstrated substantial equivalence to its predicate device (K222148) with extended analytical measuring ranges for WBC and PLT. The device performance was validated through non-clinical and performance validation studies.

    Here's a breakdown of the acceptance criteria and study details:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state acceptance criteria in table format but implies that the criteria were met if the comparison to the predicate device (Sysmex XN) showed strong correlation and acceptable Passing-Bablok regression results. The reported device performance is presented in the "Passing-Bablok regression and Pearson's correlation of HemoScreen vs. Sysmex XN" table. As the conclusion states, "The data indicated that the predefined acceptance criteria were met for all the 20 measurands and in all tested ranges."

    | Parameter | Reported HemoScreen Result Range | Reported Correlation Coefficient (r) | Reported Passing-Bablok Intercept | Reported Passing-Bablok Slope | Acceptance Criteria* |
    |---|---|---|---|---|---|
    | WBC (10³/µL) | 0.29-94.77 | 0.995 | -0.034 | 0.999 | Met |
    | RBC (10⁶/µL) | 1.91-7.13 | 0.997 | 0.023 | 0.998 | Met |
    | HGB (g/dL) | 5.65-20.72 | 0.995 | -0.006 | 0.993 | Met |
    | HCT (%) | 16.42-62.73 | 0.990 | -0.180 | 1.006 | Met |
    | MCV (fL) | 53.33-111.47 | 0.928 | 1.818 | 0.979 | Met |
    | MCH (pg) | 16.94-37.24 | 0.970 | 0.970 | 0.953 | Met |
    | MCHC (g/dL) | 30.90-36.06 | 0.654 | 10.582 | 0.677 | Met |
    | RDW (%) | 11.32-27.34 | 0.911 | 0.411 | 0.955 | Met |
    | PLT (10³/µL) | 9.25-930.66 | 0.990 | 0.705 | 0.983 | Met |
    | MPV (fL) | 9.27-14.46 | 0.825 | -0.432 | 1.055 | Met |
    | NEUT (10³/µL) | 0.00-83.11 | 0.994 | -0.042 | 1.017 | Met |
    | LYMP (10³/µL) | 0.01-72.19 | 0.947 | 0.011 | 0.998 | Met |
    | MONO (10³/µL) | 0.01-9.48 | 0.930 | -0.006 | 1.006 | Met |
    | EOS (10³/µL) | 0.00-4.10 | 0.946 | 0.008 | 0.998 | Met |
    | BASO (10³/µL) | 0.00-0.77 | 0.415 | -0.006 | 0.758 | Met |
    | NEUT (%) | 0.9-98.20 | 0.961 | 0.158 | 1.012 | Met |
    | LYMP (%) | 1.30-93.10 | 0.980 | 0.717 | 0.986 | Met |
    | MONO (%) | 0.10-45.80 | 0.877 | -0.146 | 1.005 | Met |
    | EO (%) | 0.00-34.10 | 0.855 | 0.087 | 1.016 | Met |
    | BASO (%) | 0.00-6.50 | 0.277 | -0.076 | 0.764 | Met |

    *Implied acceptance criteria: strong correlation (r close to 1), intercept close to 0, slope close to 1.
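The Passing-Bablok regression and Pearson correlation used in these method comparisons can be sketched as follows. This is a simplified, illustrative implementation (it omits the tie handling and the offset correction for slopes below -1 used in the full Passing-Bablok procedure), not the analysis code from the submission:

```python
import itertools
import statistics

def passing_bablok(x, y):
    """Simplified Passing-Bablok regression: the slope is the median of
    all pairwise slopes (slopes of exactly -1 excluded), and the
    intercept is the median of the residual offsets. The full procedure
    also applies an offset correction for slopes below -1; omitted here."""
    slopes = []
    for (x1, y1), (x2, y2) in itertools.combinations(zip(x, y), 2):
        if x1 != x2:
            s = (y2 - y1) / (x2 - x1)
            if s != -1:
                slopes.append(s)
    slope = statistics.median(slopes)
    intercept = statistics.median(yi - slope * xi for xi, yi in zip(x, y))
    return slope, intercept

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

For perfectly proportional paired results, the fitted slope approaches 1 and the intercept approaches 0, which is the pattern the table above reports for most measurands.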

    2. Sample size used for the test set and the data provenance

    • Sample Size for performance validation (WBC, PLT extended ranges): 232 residual whole blood venous samples.
    • Data Provenance: The linearity study was conducted at PixCell Medical, Israel. The document does not specify the country of origin for the 232 venous samples, nor does it explicitly state whether they were retrospective or prospective; however, "residual whole blood venous samples" usually implies retrospective use of leftover clinical specimens.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document does not mention the use of experts to establish ground truth for the test set. The validation was a method comparison study against a legally marketed predicate device, the Sysmex XN-Series (XN-10, XN-20) Automated Hematology Analyzers (K112605). The Sysmex analyzer served as the reference method.

    4. Adjudication method for the test set

    Not applicable. The study compares the HemoScreen to a commercially available predicate device (Sysmex XN), not against a human-adjudicated ground truth.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with versus without AI assistance

    Not applicable. This device is an automated hematology analyzer, not an AI-assisted diagnostic tool that involves human readers interpreting results.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done

    Yes, the performance validation data presented is for the standalone device (HemoScreen Hematology Analyzer), with its measurements compared directly to those of the Sysmex XN automated hematology analyzer. There is no human-in-the-loop component described for this specific validation.

    7. The type of ground truth used

    The ground truth for the performance validation study was established using a legally marketed predicate device: Sysmex XN-Series (XN-10, XN-20) Automated Hematology Analyzers (K112605). This is a comparative method study, where the predicate device acts as the reference standard.

    8. The sample size for the training set

    The document does not provide information about a training set size. This suggests that the study performed here focused on analytical validation of the device's measurement accuracy against a predicate, rather than an AI/machine learning model whose performance would depend on a training set. The device uses "machine vision algorithms" but the specifics of their training and associated dataset sizes are not detailed in this 510(k) summary.

    9. How the ground truth for the training set was established

    Not applicable, as a training set and its ground truth establishment are not described in this 510(k) summary.


    K Number
    K211840
    Device Name
    Sight OLO
    Date Cleared
    2022-05-09

    (329 days)

    Product Code
    Regulation Number
    864.5220
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K112605

    Intended Use

    The Sight OLO is a quantitative multi-parameter automated hematology analyzer intended for in vitro diagnostic use in screening capillary or venous whole blood samples collected in collection tubes, or fingertip samples collected using the Sight OLO test kit micro-capillary tubes.

    When used with the Sight OLO cartridge, the Sight OLO utilizes computer vision algorithms to enumerate the following CBC parameters in whole blood: WBC, RBC, HCT, MCV, MCH, MCHC, RDW, PLT, NEUT%/#, LYMPH %/#, MONO %/#, EOS%/#, and BASO%/#.

    The Sight OLO is indicated for use by clinical laboratories to identify and classify one or more of the formed elements of blood in children 3 months and above, adolescents and adults.

    Device Description

    The Sight OLO device is a computer vision based platform for blood analysis. The platform combines computer-vision algorithms for image processing to identify and quantify blood components (e.g., red blood cells) and their characteristics (e.g., cell volume) in an automated fashion. Using dedicated staining, the proposed platform provides complete blood count analysis. The Sight OLO is a compact device, designed to be automated and simple to operate, to enable rapid testing and analysis. The Sight OLO consists of a scanning and analyzing device and a CBC test kit, including a disposable cartridge and sample preparation tools. The disposable cartridge containing the blood sample is loaded into the device through the loading slot. The device is operated through the touch screen interface.

    The Sight OLO provides complete blood count information with 5-part differentials for white blood cell types. Specifically, the CBC parameters measured by the Sight OLO are listed below and include: WBC, RBC, HGB, HCT, MCV, MCH, MCHC, RDW, PLT, NEUT%/#, LYMPH %/#, MONO %/#, EOS%/# and BASO%/#. In addition, the Sight OLO signals specific WBC abnormal cases by flagging the sample.

    AI/ML Overview

    The original text provided is a 510(k) Premarket Notification from the FDA regarding the Sight OLO device. It details the device's technical specifications, indications for use, and performance data used to establish substantial equivalence to a predicate device.

    Here's an analysis of the acceptance criteria and study data based on the provided text:

    Acceptance Criteria and Reported Device Performance

    The core of the acceptance criteria in this submission appears to be demonstrating substantial equivalence to a predicate device (Sight OLO K190898) and a reference device (Sysmex XN-Series Hematology Analyzer, K112605) for various blood parameters and flagging capabilities. The performance is assessed primarily through method comparison studies and flagging studies.

    Table of Acceptance Criteria (Implied) and Reported Device Performance:

    The document explicitly states that "all measurands met the prespecified acceptance criteria for correlation, bias, slope, intercept (and the 95% two-sided confidence interval (CI) around the slope and intercept)" for the method comparison. For flagging, it states that the "overall flagging capabilities of the Sight OLO device met the predefined acceptance criteria for both sensitivity and specificity." The exact numerical criteria for each parameter's correlation, slope, intercept, and bias are not listed; the table below therefore reflects the reported performance that demonstrates those criteria were met.

    | Metric (Implied Acceptance Criterion) | Device Parameter | Reported Performance (Result that met acceptance) |
    |---|---|---|
    | Method Comparison | | |
    | Correlation Coefficient (r) (high r expected) | WBC | 0.997 |
    | | RBC | 0.991 |
    | | PLT | 0.984 |
    | | HGB | 0.990 |
    | | HCT | 0.983 |
    | | MCV | 0.941 |
    | | RDW | 0.941 |
    | | MCH | 0.976 |
    | | MCHC | 0.687 |
    | | NEUT% | 0.988 |
    | | NEUT# | 0.996 |
    | | LYMPH% | 0.991 |
    | | LYMPH# | 0.995 |
    | | MONO% | 0.926 |
    | | MONO# | 0.947 |
    | | EOS% | 0.978 |
    | | EOS# | 0.980 |
    | | BASO% | 0.658 |
    | | BASO# | 0.646 |
    | Slope (close to 1 expected; 95% CI covering 1) | Most parameters | e.g., WBC: 1.016 (1.008, 1.024) |
    | Intercept (close to 0 expected; 95% CI covering 0) | Most parameters | e.g., WBC: 0.014 (-0.025, 0.067) |
    | Median Bias (low bias expected) | All parameters | e.g., WBC: 0.11 |
    | Relative Bias (%) (low bias expected) | All parameters | e.g., WBC: 1.92% |
    | Flagging Capability | | |
    | Sensitivity (PPA) (high expected) | Overall flagging | 91.0% |
    | Specificity (NPA) (high expected) | Overall flagging | 92.6% |
    | Overall Agreement (high expected) | Overall flagging | 91.8% |

    Note on MCHC, BASO% and BASO# Correlation: The correlation coefficients for MCHC, BASO%, and BASO# are notably lower than other parameters (0.687, 0.658, 0.646 respectively). However, the document states "all measurands met the prespecified acceptance criteria," implying these values were acceptable for the purpose of demonstrating substantial equivalence. This could be due to factors like the analytical measuring range of these parameters or inherent variability.
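The flagging sensitivity (PPA), specificity (NPA), and overall agreement figures reported above are standard 2×2 agreement statistics against the reference method. A minimal sketch, using hypothetical counts rather than the study's data:

```python
def flagging_agreement(tp, fp, fn, tn):
    """Percent positive agreement (sensitivity), percent negative
    agreement (specificity), and overall percent agreement from a
    2x2 table of device flags vs. the reference method."""
    ppa = 100.0 * tp / (tp + fn)      # flagged by device / flagged by reference
    npa = 100.0 * tn / (tn + fp)      # clear on device / clear on reference
    overall = 100.0 * (tp + tn) / (tp + fp + fn + tn)
    return ppa, npa, overall

# Hypothetical counts, for illustration only:
ppa, npa, overall = flagging_agreement(tp=91, fp=37, fn=9, tn=463)
```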


    Study Details:

    1. Sample sizes used for the test set and the data provenance:

      • Method Comparison Study:

        • Sample Size: A total of 700 residual clinical K2EDTA whole blood samples. The 'N' column in the method comparison results table shows slight variations (e.g., 662 for WBC, 674 for RBC), indicating samples might have been excluded for specific parameter analysis for various reasons (e.g., insufficient volume, issues during analysis).
        • Data Provenance:
          • Country of Origin: Three (3) US sites.
          • Retrospective or Prospective: The samples were "residual clinical" samples and re-run with the updated algorithm, suggesting they were previously collected, indicating a retrospective approach to the sample acquisition for the re-run study, though the initial collection for K190898 might have been prospective. The text says, "The samples previously collected in K190898 were re-run with the updated algorithm of the subject device."
          • Sample Characteristics: Included normal and pathological samples (e.g., acute inflammation, various anemias, leukemias etc.). Covered an age range of 3 months to 94 years old, with 32% pediatric samples (3M-21Y). 365 males (52%) and 335 females (48%).
      • Flagging Study:

        • Sample Size: Over 200 samples.
        • Data Provenance:
          • Country of Origin: 3 clinical study sites (location not explicitly stated but implied US given other elements of the submission).
          • Retrospective or Prospective: "The samples previously collected in K190898 were re-run with the updated algorithm of the subject device," indicating a retrospective re-analysis with the new algorithm.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Flagging Study (Ground Truth for Flagging):
        • Number of Experts: Two primary qualified morphology examiners (Reader A and Reader B), with a third qualified morphology examiner (arbitrator) used in case of disagreement. So, a total of 3 experts potentially involved for each discordant case.
        • Qualifications: Referred to as "qualified morphology examiners." Specific experience in years or board certification is not detailed, but their "qualification" is stated.
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • Flagging Study (for Ground Truth of Manual Microscopy): A 2+1 adjudication method was used. "Two qualified morphology examiners evaluated one of the three blood films... The third blood film was saved for reading by a third qualified morphology examiner (i.e., arbitrator) in the event that there was disagreement between Reader A and Reader B."
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with versus without AI assistance:

      • No MRMC comparative effectiveness study involving human readers improving with AI assistance is described in this document. The studies presented are primarily a comparison of the device's performance to a reference device (Sysmex) and to manual microscopy (for flagging), not a human-AI teamed performance study. The device is an automated analyzer, not an AI-assisted human reading tool in the sense of an MRMC study.
    5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:

      • Yes, the performance data presented for the Sight OLO device is for its standalone performance. The device is described as an "automated hematology analyzer" that "utilizes computer vision algorithms to enumerate" parameters. The listed performance metrics (correlation, bias, sensitivity, specificity) reflect the device's output independently.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

      • Method Comparison Study: The ground truth was established by a reference device, the Sysmex XN-Series Hematology Analyzer. This is a common method for validating new automated hematology analyzers, where an established, cleared device serves as the comparator.
      • Flagging Study: The ground truth was established by expert consensus (manual light microscopy combined with adjudication by "qualified morphology examiners"). This is often considered the gold standard for morphological assessment of blood cells.
    7. The sample size for the training set:

      • The document does not specify the sample size for the training set. It focuses on the performance data for the updated algorithm applied to previously collected samples. The exact details of the original training data for the computer vision algorithms are not provided within this summary for this specific 510(k).
    8. How the ground truth for the training set was established:

      • The document does not explicitly state how the ground truth for the training set was established. It describes the "minor modifications to analysis algorithms" made to increase actionable results and improve flagging specificity/reduce invalidation. It also mentions that "the device is a computer vision based platform for blood analysis." However, the methodology for establishing the ground truth for the training of these computer vision algorithms is typically proprietary and not detailed in this type of summary. It is generally assumed to involve expert-labeled data, similar to the "qualified morphology examiners" used for the flagging study's ground truth, but this is not confirmed in the provided text for the training set itself.
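The 2+1 adjudication scheme described in item 3 (two primary readers, with a third reading the saved blood film only on disagreement) can be sketched as follows; the reader labels and the callable arbitrator are illustrative, not from the submission:

```python
from typing import Callable

def adjudicate_2plus1(reader_a: str, reader_b: str,
                      arbitrator: Callable[[], str]) -> str:
    """2+1 adjudication: the two primary reads establish ground truth
    when they agree; otherwise a third reader (arbitrator) decides.
    The arbitrator is a callable so the third film is only examined
    on disagreement, mirroring the study's saved third blood film."""
    if reader_a == reader_b:
        return reader_a
    return arbitrator()

# Readers agree, so the arbitrator is never invoked:
label = adjudicate_2plus1("flag", "flag", arbitrator=lambda: "no-flag")
```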

    K Number
    K181288
    Device Name
    Athelas One
    Manufacturer
    Date Cleared
    2018-11-05

    (173 days)

    Product Code
    Regulation Number
    864.5220
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K112605

    Intended Use

    Athelas One is indicated for use for quantitative determination of white blood cells (WBC) and Neutrophil percentages (NEUT%) in capillary or K2EDTA venous whole blood. The Athelas One system is for In Vitro Diagnostic use only. The Athelas One is only to be used with Athelas One Test Strips. The Athelas One is indicated for use in clinical laboratories and for point of care settings. The Athelas One is only indicated for use in adult populations (aged 21 and older).

    Device Description

    The Athelas One is an automated diagnostic device intended to perform tests on whole blood samples collected in K2EDTA or capillary finger stick samples collected directly into the Athelas One test strip. The system is intended to be placed within the Point of Care, and Clinical Laboratory sites. Athelas One returns WBC and Neut% metrics from the blood sample. A clinician places a sample on the Athelas One test strip either directly from finger or via pipette from K2EDTA whole blood tube. The Athelas One test strip stains and creates a monolayer of the blood sample within the chamber. The strip is inserted into the test strip slot of the Athelas One device and the device returns results by conducting image analysis of cells present. Result-viewing and device control are conducted through a companion tablet/mobile application.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Athelas One device based on the provided document:

    Acceptance Criteria and Reported Device Performance

    | Parameter | Acceptance Criteria (Target Evaluation) | Reported Device Performance | Study Section |
    |---|---|---|---|
    | WBC Precision | 7.5% CV above 2 K/µL WBC; 0.25 K/µL SD below 2 K/µL WBC | Within-run precision: all samples met the 7.5% CV target (e.g., mean 2.20 K/µL had a total CV of 5.87%; mean 23.33 K/µL had a total CV of 5.81%). Between-run precision (reproducibility): total CV 5.583% at WBC Low (2.746 K/µL), 5.726% at Medium (7.546 K/µL), 6.305% at High (15.246 K/µL); all met the acceptance criteria. | Bench: Within-run Precision/Reproducibility; Precision Reproducibility (between-run) |
    | WBC Bias/Error | ±7.5% error above 2 K/µL WBC; ±0.25 K/µL error below 2 K/µL WBC | Method comparison: mean bias for WBC (1.1-23 K/µL) was -0.151 K/µL (-2.31%). Specimen stability: met ±7.5% bias for WBC for samples tested within 24 hours. | Clinical: Method Comparison; Specimen Stability Studies |
    | Neutrophil % Precision | 5% SD or 15% CV (updated based on Sysmex XW-100 Neut% CV criteria) | Within-run precision: not explicitly stated for neutrophils in the summarized table. Between-run precision (reproducibility): total CV 6.780% at NEUT% Low (50.781%), 6.689% at Medium (49.990%), 6.525% at High (50.823%); all met the acceptance criteria (15% CV is allowed where 5% SD is not met). | Bench: Precision Reproducibility (between-run) |
    | Neutrophil % Bias | ±10% bias or ±5% Neut% total error (whichever is larger) | Method comparison: mean bias for neutrophil % (8-92.89%) was 0.636% (1.18%). Specimen stability: met ±10% bias for neutrophils for samples tested within 24 hours. | Clinical: Method Comparison; Specimen Stability Studies |
    | Linearity (WBC) | Not a single target; demonstrated to be linear with acceptable max % difference for each interval | R² = 0.997, slope = 1.013, intercept = 0.0449, CVr = 5.08%. Demonstrated to be linear from lower to upper limit and within the measured allowable max % difference. | Bench: Linearity |
    | Flagging Accuracy (Morphological Flags) | ≥ 90% accuracy | Positive agreement (sensitivity) = 90.91%; negative agreement (specificity) = 96.71%; overall agreement = 94.87%. Met the specification. | Clinical: Flagging Comparison |
    | Matrix Comparability (Venous vs. Capillary) | No statistical difference; regression parameters and 95% CI of bias within evaluation criteria | WBC: slope 1.026 (CI: 1.000, 1.055), intercept -0.145 (CI: -0.319, 0.018), mean bias 0.056 K/µL (-0.588%), r = 0.996. Neutrophil %: slope 0.999 (CI: 0.938, 1.058), intercept 0.457 (CI: -2.502, 3.614), mean bias 0.162 percentage points (-0.333%), r = 0.969. Concluded no statistical difference. | Clinical: Matrix Comparison |
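The dual precision criterion above (a percent-CV limit at WBC means above 2 K/µL, an absolute-SD limit at or below it) is a common pattern in hematology precision claims. A minimal sketch of that check, with the thresholds taken from the criteria above:

```python
import statistics

def wbc_precision_ok(replicates, cv_limit_pct=7.5,
                     sd_limit=0.25, cutoff=2.0):
    """Check replicate WBC results (K/uL) against a dual acceptance
    criterion: %CV <= 7.5 when the mean exceeds 2 K/uL, and
    SD <= 0.25 K/uL at or below 2 K/uL."""
    mean = statistics.fmean(replicates)
    sd = statistics.stdev(replicates)   # sample standard deviation
    if mean > cutoff:
        return 100.0 * sd / mean <= cv_limit_pct
    return sd <= sd_limit
```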

    Study Details

    1. Sample size used for the test set and the data provenance:

      • Within-run Precision: 9 whole blood samples, 90 tests per sample (total 810 tests). Provenance not explicitly stated beyond "K2EDTA whole blood samples" and "normal and abnormal samples."
      • Precision/Reproducibility (between run): 3 levels of quality control material, 80 readings per level at each of 3 sites (total 720 readings for controls). Provenance not explicitly stated.
      • Linearity: 10 samples run in 4 replicates across 4 devices (a core of 160 measurements, with varying replicates/devices at other points of the linearity study). Provenance of samples not explicitly stated, beyond their being prepared by "pooling together one low WBC concentration fresh whole blood sample [and] one high WBC concentration sample."
      • Interfering Substances: "Various substances... found naturally occurring in patient samples, or were spiked in whole blood." Specific sample size not provided.
      • Reference Intervals: 120 healthy donors. Provenance not explicitly stated.
      • Limit of Blank (LoB): 120 total repeated measurements of blank samples.
      • Limit of Detection (LoD): 5 low-level samples, 2 replicates per sample over 3 days (total ~30 measurements per lot for low-level samples, plus 60 blank measurements per lot).
      • Limit of Quantification (LoQ): 4 independent low-level whole blood samples, 3 replicates per sample.
      • Specimen Stability: 9 different venous blood samples (low, normal, high WBC levels).
      • Test Strip Stability: Not a patient sample study, but involved testing control fluid.
      • Method Comparison: 312 patient samples. Provenance: "taken at 3 point of care sites in the US." The document implies prospective collection as they were "run ... on the XE-5000" and then analyzed by Athelas One.
      • Flagging Comparison: The same 312 patient samples used in Method Comparison.
      • Matrix Comparison: 59 patients who provided both capillary finger-prick and venous whole blood samples.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • The document does not explicitly state the number of experts or their qualifications used to establish ground truth.
      • For the Method Comparison and Flagging Comparison, the "ground truth" was the Sysmex XE-5000 predicate analyzer. This is a legally marketed device and generally considered a gold standard in this context.
      • For the Linearity study, "original concentrations were obtained from the predicate analyzer (Sysmex XE-5000)."
    3. Adjudication method for the test set:

      • Not applicable as the ground truth was primarily established by a predicate device (Sysmex XE-5000) or by CLSI-recommended methods for bench studies. There was no mention of human expert adjudication for discrepancies.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with versus without AI assistance:

      • No MRMC comparative effectiveness study was done. This document describes a standalone performance study of the Athelas One device against a predicate device, not a human-in-the-loop study comparing human performance with and without AI assistance.
    5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:

      • Yes, a standalone performance study was done. The Athelas One device, which uses "computer vision based image analysis" (i.e., its algorithm), was compared directly against a predicate automated hematology analyzer (Sysmex XE-5000) for WBC and Neutrophil % measurement.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • The primary ground truth for clinical comparisons (Method Comparison, Flagging Comparison) was the Sysmex XE-5000 automated hematology analyzer, a legally marketed predicate device.
      • For bench studies like Linearity and LoQ, concentrations were often referenced against the Sysmex XE-5000 as well.
      • For precision studies, the device's own measurements were evaluated for consistency against predefined statistical targets (SD, CV).
      • For interfering substances, the impact on bias and precision performance was evaluated, likely against expectations from predicate devices or known clinical impact.
    7. The sample size for the training set:

      • The document does not provide information on the training set size or how the algorithm was developed. This submission focuses on the validation of the finished device for regulatory approval, not its development process.
    8. How the ground truth for the training set was established:

      • Since information on the training set itself is not provided, how its ground truth was established is also not described in this document.
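The mean-bias and percent-bias figures cited in the method comparison above follow the usual paired-difference computation against the reference analyzer. A minimal sketch in the style of a CLSI EP09 method comparison, not the study's actual analysis code:

```python
def method_comparison_bias(device, reference):
    """Mean bias and mean relative bias (%) of paired device vs.
    reference results, as reported in a method comparison study."""
    diffs = [d - r for d, r in zip(device, reference)]
    mean_bias = sum(diffs) / len(diffs)
    rel = [100.0 * (d - r) / r for d, r in zip(device, reference) if r != 0]
    mean_rel_bias = sum(rel) / len(rel)
    return mean_bias, mean_rel_bias
```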

    K Number
    K180020
    Date Cleared
    2018-10-29

    (300 days)

    Product Code
    Regulation Number
    864.5220
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K112605

    Intended Use

    The HemoScreen is a point-of-care (POC) automated hematology analyzer intended for the enumeration and classification of the following parameters in capillary and venous whole blood (K2EDTA anticoagulated): WBC, RBC, HGB, HCT, MCV, MCH, MCHC, RDW, PLT, MPV, NEUT%, NEUT#, LYMP%, LYMP#, MONO%, MONO#, EO%, EO#, BASO%, and BASO#. The HemoScreen is for in vitro diagnostic use in clinical laboratories and/or POC settings for adults and children at least 2 years of age.

    Device Description

    HemoScreen is a point of care (POC), automated hematology analyzer that provides 20 common CBC parameters, including a 5-part leukocyte (WBC) differential, in capillary and venous whole blood samples. The HemoScreen analyzer (reader) is a tabletop device that is designed for use with a disposable reagent cartridge. In addition to the cartridge, the system includes a disposable sampler with two glass capillaries which is used to collect the blood sample and then transfer it to the cartridge. Once the cartridge is inserted into the reader, there are no further procedural steps; blood is expelled from the capillaries (sampler) into the reagent compartments (cartridge). The reader then mixes the blood sample with the reagents by alternately pressing compressible portions of the cartridge, eventually causing the suspension of cells to flow into the microfluidic chamber. Cells flowing in the microfluidic chamber focus into a single-cell plane due to a patented physical phenomenon known as viscoelastic focusing. The reader then captures images of the focused cells and analyzes them in real time using machine vision algorithms. When analysis is complete, the results are displayed to the user on the reader's touch screen and may be printed to an adjacent printer or exported to a USB flash drive. The cartridge is ejected by the analyzer after analysis, and can then be safely disposed of, as the reagents and blood sample remain within the cartridge. The basic staining and microscopic image analysis performed by HemoScreen closely resembles the traditional blood smear and the hemocytometer counting chamber. Leukocytes are classified based on their staining properties and morphology, whereas absolute counts are obtained by counting the cells contained in a chamber of predetermined volume. Test results are obtained within six (6) minutes and the results are saved.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the HemoScreen Hematology Analyzer based on the provided text:

    Acceptance Criteria and Device Performance

    The acceptance criteria are generally implied by the statement "the predefined acceptance criteria were met for all the 20 measurands and in all tested ranges" for precision, and "the 95% confidence intervals for mean bias / mean relative bias were within the acceptance limits" and "the correlation point estimates met the acceptance criteria" for method comparison. Specific numerical acceptance limits are not explicitly stated in the provided text for all parameters (e.g., a specific Pearson correlation threshold, or a specific mean bias range), but the tables present the reported device performance.

    Table of Acceptance Criteria (Implied) and Reported Device Performance (Focus on Method Comparison data, as this provides a clearer standard for comparison):

    | Parameter (Units) | Implied Acceptance Criteria | Reported Pearson r vs. Sysmex | Reported Mean Bias vs. Sysmex | Reported Pearson r vs. Blood Smear | Reported Mean Bias vs. Blood Smear |
    |---|---|---|---|---|---|
    | WBC Parameters | | | | | |
    | WBC (x 10³/μL) | Within acceptance limits | 0.993 | 0.09 (-0.14%) | N/A | N/A |
    | NEUT (x 10³/μL) | Within acceptance limits | 0.989 | 0.22 (3.83%) | N/A | N/A |
    | LYM (x 10³/μL) | Within acceptance limits | 0.997 | 0.12 (5.46%) | N/A | N/A |
    | MON (x 10³/μL) | Within acceptance limits | 0.718 | -0.16 (-27.08%) | N/A | N/A |
    | EOS (x 10³/μL) | Within acceptance limits | 0.981 | -0.00 (-4.33%) | N/A | N/A |
    | BAS (x 10³/μL) | Within acceptance limits (BASO Study) | 0.484 | 0.04 (11.87%) | N/A | N/A |
    | NEUT (%) | Within acceptance limits | 0.984 | 1.90 (3.71%) | 0.960 | -0.30 (-0.23%) |
    | LYM (%) | Within acceptance limits | 0.991 | 0.96 (5.11%) | 0.961 | 1.01 (8.45%) |
    | MON (%) | Within acceptance limits | 0.780 | -1.87 (-26.90%) | 0.594 | -0.17 (-2.41%) |
    | EOS (%) | Within acceptance limits | 0.981 | -0.06 (-2.97%) | 0.943 | 0.13 (16.01%) |
    | BAS (%) | Within acceptance limits (BASO Study) | N/A | N/A | 0.725 | -0.01 (43.99%) |
    | RBC Parameters | | | | | |
    | RBC (x 10⁶/μL) | Within acceptance limits | 0.989 | 0.02 (0.52%) | N/A | N/A |
    | HGB (g/dL) | Within acceptance limits | 0.985 | 0.13 (0.81%) | N/A | N/A |
    | HCT (%) | Within acceptance limits | 0.982 | 1.15 (3.05%) | N/A | N/A |
    | MCV (fL) | Within acceptance limits | 0.956 | 2.27 (2.53%) | N/A | N/A |
    | MCH (pg) | Within acceptance limits | 0.957 | 0.06 (0.20%) | N/A | N/A |
    | MCHC (g/dL) | Within acceptance limits | 0.675 | -0.78 (-2.34%) | N/A | N/A |
    | RDW (%) | Within acceptance limits | 0.946 | -0.03 (-0.12%) | N/A | N/A |
    | PLT Parameters | | | | | |
    | PLT (x 10³/μL) | Within acceptance limits | 0.967 | 14.57 (5.12%) | N/A | N/A |
    | MPV (fL) | Within acceptance limits | 0.831 | 0.21 (1.93%) | N/A | N/A |

    Note on Basophils (BAS# and BAS%): The initial method comparison notes that basophil data were "not meaningful" due to the distribution of samples. A separate "BASO Study" was conducted for basophils, which yielded specific correlation and bias values as shown.
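    The statistics reported in the table (Pearson correlation, mean bias, and mean relative bias) can be sketched as follows. This is a minimal illustration of the standard definitions; the paired values below are hypothetical, not data from the 510(k) study.

    ```python
    # Standard method-comparison statistics: Pearson r, mean bias, and
    # mean relative bias. Sample values are hypothetical, not study data.
    from statistics import mean, stdev

    def pearson_r(x, y):
        """Pearson correlation coefficient between paired measurements."""
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        return cov / ((len(x) - 1) * stdev(x) * stdev(y))

    def mean_bias(test, ref):
        """Mean bias: average of (test - reference) over all pairs."""
        return mean(t - r for t, r in zip(test, ref))

    def mean_relative_bias(test, ref):
        """Mean relative bias: average of (test - reference) / reference."""
        return mean((t - r) / r for t, r in zip(test, ref))

    # Hypothetical paired WBC results (x 10^3/uL): device vs. comparator.
    device = [4.5, 6.2, 7.8, 9.1, 11.4]
    comparator = [4.4, 6.0, 8.0, 9.0, 11.6]

    print(round(pearson_r(device, comparator), 3))
    print(round(mean_bias(device, comparator), 3))
    print(round(100 * mean_relative_bias(device, comparator), 2))
    ```

    The submission reports bias both in absolute units and as a percentage (mean relative bias), which is why the table shows entries like "0.22 (3.83%)".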


    Study Information:

    1. Sample sizes used for the test set and the data provenance:

    • Method Comparison & Clinical Sensitivity/Specificity:
      • Sample Size: 495 normal and pathological residual whole blood specimens.
      • Data Provenance: Collected across three clinical sites (retrospective, as they were "residual" samples).
    • Basophil-specific study ("BASO Study"):
      • Sample Size: 95 whole blood samples that included high levels of basophils.
      • Data Provenance: Not explicitly stated whether the samples came from the same three clinical sites or whether collection was retrospective or prospective; likely similar to the main method comparison study.
    • Flagging Study:
      • Sample Size: 402 whole blood specimens.
      • Data Provenance: Analyzed across three clinical sites; retrospective/prospective status not explicitly stated.
    • Reference Intervals:
      • Sample Size: Minimum of 120 male subjects and 120 female subjects (total 243 subjects reported: 123 female, 120 male).
      • Data Provenance: Single US site, freshly collected K2EDTA venous blood from healthy (self-reported) adult (19-69 years old) male and female volunteers on a single occasion (prospective).
    • Vein to Capillary Equivalency:
      • Sample Size: 75 normal and pathological paired capillary and venous whole blood specimens.
      • Data Provenance: Drawn from volunteer subjects across three clinical sites (prospective, as samples were "drawn").
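    The minimum of 120 subjects per group in the reference-interval study reflects the usual nonparametric approach (CLSI EP28-style), which estimates the 2.5th and 97.5th percentiles directly from the ranked data. A minimal sketch follows; the rank-interpolation rule is an assumption, since the submission does not specify the estimator used.

    ```python
    # Nonparametric 95% reference interval, the usual reason a minimum of
    # 120 subjects per group is required (CLSI EP28-style approach).
    # The interpolation rule below is an assumption for illustration.

    def reference_interval(values, lower_pct=2.5, upper_pct=97.5):
        """Return (lower, upper) nonparametric reference limits.

        Uses the rank-based percentile estimate r = p/100 * (n + 1),
        interpolating between adjacent order statistics.
        """
        s = sorted(values)
        n = len(s)

        def percentile(p):
            rank = p / 100.0 * (n + 1)
            lo = max(int(rank) - 1, 0)   # 0-based index of the order statistic
            hi = min(lo + 1, n - 1)
            frac = rank - int(rank)
            return s[lo] + frac * (s[hi] - s[lo])

        return percentile(lower_pct), percentile(upper_pct)

    # With n = 120, the 2.5th/97.5th percentile ranks fall near 3.0 and
    # 118.0, i.e. roughly the 3rd-lowest and 3rd-highest observations.
    ```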

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Method Comparison & Clinical Sensitivity/Specificity, Flagging, Vein to Capillary Equivalency Studies:
      • For blood smear analysis (part of the ground truth for differentials and flagging), "Experienced operators trained to use the automated blood smear performed the differentials, including morphology evaluation, and the results were verified by these operators." The exact number or specific qualifications (e.g., "radiologist with 10 years of experience") are not provided, other than "experienced operators."
      • For the comparative instrument (Sysmex XN Series), "laboratory personnel" performed the analyses. Specific qualifications are not detailed.
    • BASO Study (ground truth for basophils):
      • Light microscopy (standard reference method) was used for %basophils, performed by "experienced operators" (implied, similar to other blood smear analyses).
      • Sysmex method was used for absolute counts, performed by "laboratory personnel."

    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • The document mentions that for blood smear analysis, "the results were verified by these operators," suggesting internal verification rather than a formal multi-expert adjudication method like 2+1 or 3+1. It does not explicitly state a formal adjudication process for discordant results between the device and the ground truth.

    4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human-reader improvement with vs. without AI assistance:

    • No, a MRMC comparative effectiveness study involving human readers improving with/without AI assistance was not done. The HemoScreen is an automated hematology analyzer, not an AI-assisted diagnostic tool for human readers. The clinical studies compare its performance against established laboratory methods (Sysmex) and manual blood smear analysis, not evaluating human reader performance.

    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • Yes, the studies evaluate the standalone performance of the HemoScreen Hematology Analyzer. The device is an automated system that performs enumeration and classification using its internal machine vision algorithms. The "method comparison" sections directly assess the algorithm's performance against reference methods.

    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • For enumeration parameters (WBC, RBC, HGB, PLT etc.): The ground truth was primarily established by a legally marketed predicate device, the Sysmex XN Series, and in some cases, by other reference methods like centrifugation for LoB/LoD/LoQ studies.
    • For differential parameters (NEUT%, LYM%, MONO%, EOS%, BASO%): The ground truth was established by the predicate device (Sysmex XN Series) and manual blood film differential counts (microscopy) performed by "experienced operators trained to use the automated blood smear," following CLSI H20-A2 guidelines (2 slides / 200 cells counted per slide for a total of 400 cells). This falls under a form of expert consensus/manual review.
    • For Flagging: Ground truth was based on differential results from the predicate and manual blood smears.
    • For LoB/LoD/LoQ: Processed blood samples measured by a "comparative method" (likely a reference method capable of precise low-level measurement).
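    The CLSI H20-A2 reference differential cited above pools two 200-cell slide counts into a single 400-cell differential. A minimal sketch of that arithmetic, with hypothetical cell counts:

    ```python
    # CLSI H20-A2-style manual reference differential: two slides of
    # 200 cells each, pooled into a 400-cell differential.
    # The per-slide counts below are hypothetical.

    def pooled_differential(slide1, slide2):
        """Pool two per-slide cell-class counts into percentage results."""
        total = sum(slide1.values()) + sum(slide2.values())
        return {cls: 100.0 * (slide1.get(cls, 0) + slide2.get(cls, 0)) / total
                for cls in set(slide1) | set(slide2)}

    slide1 = {"NEUT": 120, "LYM": 55, "MON": 15, "EOS": 8, "BAS": 2}  # 200 cells
    slide2 = {"NEUT": 118, "LYM": 57, "MON": 14, "EOS": 9, "BAS": 2}  # 200 cells

    diff = pooled_differential(slide1, slide2)
    # e.g. diff["NEUT"] == 59.5  (238 of 400 cells)
    ```

    Pooling 400 cells rather than reporting a single 200-cell count narrows the sampling uncertainty of the manual differential used as ground truth.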

    7. The sample size for the training set:

    • The document does not explicitly state the sample size for the training set. The provided information pertains to the validation studies (nonclinical and clinical data proving performance). Automated analyzers like HemoScreen employ machine vision algorithms, which would have been trained on a separate dataset, but the details of this training dataset are not included in the 510(k) summary.

    8. How the ground truth for the training set was established:

    • As the training set size is not provided, the method for establishing its ground truth is also not described in this document. For such devices, ground truth for training would typically involve large datasets of images with expert-verified cell classifications and counts.