Search Results

Found 2 results

510(k) Data Aggregation

    K Number: K243283
    Date Cleared: 2025-02-20 (126 days)

    Regulation Number: 864.5220
    Device Name: Alinity h-series System

    Intended Use

    The Alinity h-series System is an integrated hematology analyzer (Alinity hq) and slide maker stainer (Alinity hs) intended for screening patient populations found in clinical laboratories by qualified health care professionals. The Alinity h-series can be configured as:

    · One stand-alone automated hematology analyzer system.

    · A multimodule system that includes at least one Alinity hq analyzer module and may include one Alinity hs slide maker stainer module.

    The Alinity hq analyzer module provides complete blood count and a 6-part white blood cell differential for normal and abnormal cells in capillary and venous whole blood collected in K2EDTA. The Alinity hq analyzer provides quantitative results for the following measurands: WBC, NEU, %N, LYM, %L, MON, %M, EOS, %E, BASO, %B, IG, %IG, RBC, HCT, HGB, MCV, MCH, MCHC, MCHr, RDW, NRBC, NR/W, RETIC, %R, IRF, PLT, MPV, %rP. The Alinity hq analyzer module is indicated to identify patients with hematologic parameters within and outside of established reference ranges. The Alinity hs slide maker stainer module automates whole blood film preparation and staining and stains externally prepared whole blood smears.

    For in vitro diagnostic use.

    Device Description

    The Alinity h-series System is a multimodule system that consists of different combinations of one or more of the following modules: a quantitative multi-parameter automated hematology analyzer (Alinity hq) and an automated slide maker stainer (Alinity hs).

    The Alinity hq is a quantitative, multi-parameter, automated hematology analyzer designed for in vitro diagnostic use in counting and characterizing blood cells using a multi-angle polarized scatter separation (MAPSS) method to detect and count red blood cells (RBC), nucleated red blood cells (NRBC), platelets (PLT), and white blood cells (WBC), and to perform WBC differentials (DIFF) in whole blood.

    The operator can also choose whether reticulocytes (RETIC) are measured in the same run. The available test selections are:

    • CBC+DIFF: Complete blood count with differential
    • CBC+DIFF+RETIC: Complete blood count with differential and reticulocytes

    The Alinity h-series of instruments has a scalable design to provide full integration of multiple automated hematology analyzers that can include the integration of an automated blood film preparation and staining module, all of which are controlled by one user interface. The modules are designed to fit together. Each module has an internal conveyor that enables racks of specimen tubes to be transported between modules. The system can move racks between modules to perform different tests on a given specimen (e.g., make slide smears on the Alinity hs).

    An Alinity h-series system can be configured as follows:

    • Configuration 1: 1 (Alinity hq) + 0 (Alinity hs) = 1+0
    • Configuration 2: 1 (Alinity hq) + 1 (Alinity hs) = 1+1
    • Configuration 3: 2 (Alinity hq) + 0 (Alinity hs) = 2+0
    • Configuration 4: 2 (Alinity hq) + 1 (Alinity hs) = 2+1

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the Alinity h-series System, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document doesn't explicitly state all acceptance criteria in a dedicated table format with clear "criteria" vs. "performance" columns for every test. However, it does mention that all results "met the predefined acceptance criteria" for various tests. The tables provided show the device's performance, and the accompanying text confirms it met criteria.

    Here's a consolidated view of relevant performance data presented, with implicit acceptance criteria being that the results are within acceptable ranges for clinical diagnostic instruments, or that they demonstrate improvement as intended by the software modification.

    | Test Category | Measurand | Reported Device Performance (Subject Device SW 5.8) | Acceptance Criteria (implicit, based on "met predefined acceptance criteria" statements) |
    |---|---|---|---|
    | Precision (Normal Samples) | BASO ($\times 10^3/\mu L$) | CBC+Diff: 0.021 SD (Range 0.01 to 0.12); CBC+Diff+Retic: 0.025 SD (Range 0.01 to 0.11) | SD/%CV point estimates to be within predefined limits. (Explicitly stated: "All samples were evaluated against all applicable acceptance criteria and met all acceptance criteria.") |
    | | %BASO (%) | CBC+Diff: 0.352 SD, 41.04 %CV (Range 0.13 to 2.20); CBC+Diff+Retic: 0.455 SD, 41.08 %CV (Range 0.13 to 2.00) | |
    | | LYM ($\times 10^3/\mu L$) | CBC+Diff: 0.068 SD (Range 1.10 to 2.01), 3.09 %CV (Range 1.94 to 3.05); CBC+Diff+Retic: 0.063 SD (Range 1.10 to 2.01), 3.17 %CV (Range 1.91 to 3.07) | |
    | | %LYM (%) | CBC+Diff: 1.239 SD, 3.34 %CV (Range 13.80 to 57.80); CBC+Diff+Retic: 1.193 SD, 3.63 %CV (Range 13.40 to 58.10) | |
    | | WBC ($\times 10^3/\mu L$) | CBC+Diff: 0.068 SD (Range 3.72 to 4.06), 2.71 %CV (Range 3.92 to 10.60); CBC+Diff+Retic: 0.085 SD (Range 3.72 to 4.04), 2.22 %CV (Range 3.93 to 10.40) | |
    | Precision (Pathological Samples) | WBC ($\times 10^3/\mu L$) | Low: 0.083 SD (Range 0.06 to 2.01); High: 1.88 %CV (Range 41.40 to 209.00) | SD or %CV point estimates to be within predefined limits. (Explicitly stated: "All results met the predefined acceptance criteria, demonstrating acceptable short-term precision...") |
    | | BASO ($\times 10^3/\mu L$) | Low WBC Related: 0.010 SD (Range 0.00 to 0.04) | |
    | | LYM ($\times 10^3/\mu L$) | Low WBC Related: 0.040 SD (Range 0.12 to 0.74) | |
    | Linearity | WBC | Overall Linearity Range: (0.00 to 448.58) $\times 10^3/\mu L$ | All results met the predefined acceptance criteria and were determined to be acceptable. |
    | Method Comparison (vs. Sysmex XN-10) | BASO ($\times 10^3/\mu L$) | r: 0.26 (0.22, 0.30); Slope: 1.25 (1.20, 1.30); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.00 - 2.41, N=1812) | Bias at medical decision points evaluated and within predefined acceptance criteria. "All results were within the predefined acceptance criteria and found to be acceptable..." |
    | | %BASO (%) | r: 0.44 (0.40, 0.48); Slope: 1.44 (1.39, 1.50); Intercept: -0.12 (-0.14, -0.09). (Sample Range 0.00 - 8.37, N=1812) | |
    | | LYM ($\times 10^3/\mu L$) | r: 0.99 (0.99, 0.99); Slope: 0.99 (0.99, 1.00); Intercept: 0.02 (0.01, 0.02). (Sample Range 0.05 - 8.34, N=1598) | |
    | | %LYM (%) | r: 1.00 (1.00, 1.00); Slope: 1.00 (0.99, 1.00); Intercept: 0.04 (0.04, 0.15). (Sample Range 0.34 - 84.60, N=1598) | |
    | | WBC ($\times 10^3/\mu L$) | r: 1.00 (1.00, 1.00); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.07 - 436.00, N=1958) | |
    | Method Comparison (vs. Predicate Device SW 5.0) | BASO ($\times 10^3/\mu L$) | r: 0.75 (0.73, 0.77); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.00 - 2.41, N=1801) | Bias at medical decision points evaluated and within predefined acceptance criteria. "All results were within the predefined acceptance criteria and found to be acceptable..." |
    | | %BASO (%) | r: 0.92 (0.91, 0.92); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.00 - 8.37, N=1801) | |
    | | LYM ($\times 10^3/\mu L$) | r: 1.00 (1.00, 1.00); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.05 - 8.34, N=1589) | |
    | | %LYM (%) | r: 1.00 (1.00, 1.00); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.34 - 84.60, N=1589) | |
    | | WBC ($\times 10^3/\mu L$) | r: 1.00 (1.00, 1.00); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.07 - 436.00, N=1948) | |
    | Clinical Sensitivity/Specificity | Any Morphological Flags | Sensitivity: 67.57% (58.03%, 76.15%); Specificity: 77.55% (73.79%, 81.01%); Efficiency: 75.85% (72.37%, 79.09%). (N=650) | Met predefined acceptance criteria (numerical targets not explicitly given, but stated as met). |
    | | Any Distributional Abnormalities | Sensitivity: 83.02% (77.95%, 87.34%); Specificity: 80.59% (76.20%, 84.49%); Efficiency: 81.60% (78.37%, 84.54%). (N=636) | |
    | | Any Morphological and/or Distributional Abnormalities | Sensitivity: 80.98% (76.12%, 85.23%); Specificity: 76.09% (71.22%, 80.51%); Efficiency: 78.40% (75.02%, 81.51%). (N=648) | |
    | Reference Range Verification | All measurands | Upper bound of the 95% CI for the percentage of replayed results within predicate reference ranges was $\ge$ 95%. | Upper bound of the two-sided 95% CI for the percentage of replayed results that were within the reference ranges of the predicate device was $\ge$ 95%. (Explicitly stated as met.) |
    | Specific Improvement for Affected BASO Samples | BASO ($\times 10^3/\mu L$) | Subject Device: r: 0.84 (0.75, 0.90); Slope: 1.17 (1.00, 1.32); Intercept: 0.00 (-0.01, 0.01). (Range 0.00 - 1.69, N=67). Predicate Device: r: 0.93 (0.90, 0.96); Slope: 2.22 (1.64, 2.80); Intercept: -0.01 (-0.05, 0.02). (Range 0.03 - 8.11, N=67). Demonstrates reduction of falsely increased basophils. | Results for potentially impacted measurands (BASO and %BASO) must demonstrate reduction of falsely increased basophil measurements. "Additionally, the results demonstrate a reduction in the number of false positive sample %BASO classifications..." |
    | | %BASO (%) | Subject Device: r: 0.61 (0.44, 0.75); Slope: 1.22 (0.98, 1.52); Intercept: -0.08 (-0.39, 0.19). (Range 0.00 - 4.33, N=67). Predicate Device: r: 0.33 (0.10, 0.53); Slope: 0.54 (0.31, 0.83); Intercept: 1.83 (1.45, 2.05). (Range 2.00 - 4.49, N=67). Demonstrates reduction of falsely increased basophils. | |
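
    The clinical sensitivity, specificity, and efficiency figures above are standard 2x2-table statistics reported with exact binomial confidence intervals. The Python sketch below shows how such values are conventionally computed; the function names and counts are illustrative assumptions, since the summary reports only the resulting percentages and not the underlying true/false positive and negative cell counts.

```python
# Minimal sketch (not the submission's code or counts) of how flag sensitivity,
# specificity, and efficiency with exact two-sided 95% CIs are conventionally
# computed from a 2x2 table of device flags vs. the microscopy reference.
from scipy.stats import beta


def clopper_pearson(successes, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided confidence interval for a proportion."""
    lower = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, n - successes + 1)
    upper = 1.0 if successes == n else beta.ppf(1 - alpha / 2, successes + 1, n - successes)
    return lower, upper


def flag_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, and efficiency (overall agreement) with 95% CIs."""
    n = tp + fp + fn + tn
    return {
        "sensitivity": (tp / (tp + fn), clopper_pearson(tp, tp + fn)),
        "specificity": (tn / (tn + fp), clopper_pearson(tn, tn + fp)),
        "efficiency": ((tp + tn) / n, clopper_pearson(tp + tn, n)),
    }


# Illustrative counts only; the 510(k) summary reports percentages, not raw cells.
print(flag_performance(tp=45, fp=30, fn=15, tn=210))
```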

    2. Sample Size Used for the Test Set and Data Provenance

    The "test set" for this submission largely refers to re-analyzing raw data from the predicate device's (K220031) submission using the new algorithm.

    • For Precision Studies (Normal Samples):
      • Sample Size: 20 unique healthy donors for CBC+Diff, 19 for CBC+Diff+Retic.
      • Data Provenance: Retrospective (raw data files from K220031 submission were replayed). The origin of the donors (country) is not specified, but they are described as "healthy."
    • For Precision Studies (Pathological Samples and Medical Decision Levels):
      • Sample Size: Minimum of 16 donors per measurand and range, with a minimum of 4 repeatability samples per measurand and range (2 for CBC+Diff, 2 for CBC+Diff+Retic).
      • Data Provenance: Retrospective (raw data files from K220031 submission were replayed). The origin of the donors (country) is not specified, but they include "abnormal whole blood samples."
    • For Linearity:
      • Sample Size: RBC, HGB, NRBC used whole blood samples; WBC, PLT, RETIC used commercially available linearity kits. A minimum of 9 levels were prepared for each measurand.
      • Data Provenance: Retrospective (raw data files from K220031 submission were replayed). Whole blood samples and commercial kits.
    • For Method Comparison Study (Subject Device vs. Predicate Device K220031 and Sysmex XN-10):
      • Sample Size: 2,194 unique venous and/or capillary specimens. 1,528 specimens from subjects with medical conditions, 244 without.
      • Data Provenance: Retrospective (raw data files from K220031 submission were replayed). The origin of the donors (country) is not specified, but collected across 7 clinical sites, representing a "wide variety of disease states (clinical conditions)" and "wide range of demographics (age and sex)."
      • Specific "affected samples" for basophil analysis: 67 samples.
    • For Specimen Stability Studies:
      • K2EDTA Venous & Capillary Whole Blood: 14 unique native venous, 30 unique native capillary.
      • Controlled Room Temp K2EDTA Venous & Capillary Whole Blood: 10 K2EDTA venous from healthy donors, 10 abnormal de-identified leftover K2EDTA venous, 20 normal K2EDTA capillary.
      • K3EDTA Venous Whole Blood: 14 unique native venous.
      • K3EDTA Capillary Whole Blood: 94 unique native capillary.
      • Data Provenance: Retrospective (raw data files from K220031 submission were replayed). Samples from "apparently healthy donors" and "abnormal de-identified leftover" samples.
    • For Detection Limit:
      • Sample Size: 2 unique samples per day over a minimum of 3 days.
      • Data Provenance: Retrospective (raw data files from K220031 submission were replayed).
    • For Clinical Sensitivity/Specificity:
      • Sample Size: A subset of 674 venous and capillary whole blood specimens from the method comparison study.
      • Data Provenance: Retrospective (raw data files from K220031 submission were replayed). Collected from 6 clinical sites.
    • For Reference Range Verification (the acceptance check is sketched after this list):
      • Sample Size: Not explicitly stated but implied to be from the K220031 submission's reference range studies using "apparently healthy subjects" for adult and pediatric subgroups.
      • Data Provenance: Retrospective (raw data files from K220031 submission were replayed).
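
    As referenced above, the reference range verification criterion quoted in the summary is that the upper bound of the two-sided 95% CI for the percentage of replayed results falling within the predicate device's reference ranges be at least 95%. A minimal sketch of that check, assuming an exact (Clopper-Pearson) interval and hypothetical data:

```python
# Assumed implementation (not Abbott's) of the reference range verification
# criterion quoted in the summary: the upper bound of the two-sided 95% CI for
# the proportion of replayed results inside the predicate's reference range
# must be at least 95%.
from scipy.stats import beta


def ci_upper_bound(in_range, n, alpha=0.05):
    """Upper limit of the exact two-sided (1 - alpha) CI for a proportion."""
    return 1.0 if in_range == n else beta.ppf(1 - alpha / 2, in_range + 1, n - in_range)


def verify_reference_range(results, low, high):
    """Apply the >= 95% upper-bound criterion for one measurand (illustrative)."""
    in_range = sum(low <= x <= high for x in results)
    return ci_upper_bound(in_range, len(results)) >= 0.95


# Hypothetical replayed WBC results checked against a hypothetical predicate range.
print(verify_reference_range([5.1, 6.3, 7.0, 4.4, 9.8, 5.5, 6.1, 8.2], low=3.7, high=10.4))
```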

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Clinical Sensitivity/Specificity Study:
      • Number of Experts: Two independent 200-cell microscopic reviews were performed per sample, i.e., at least two reviewers per sample.
      • Qualifications: Not explicitly stated beyond "microscopic reviews of a blood smear (reference method)." It can be inferred these would be qualified laboratory professionals (e.g., medical technologists, clinical laboratory scientists) with expertise in manual differential counting, but specific years of experience or board certifications are not provided.
      • Ground Truth: The "final WBC differential and WBC, RBC, and PLT morphology results were based on the 400-cell WBC differential counts derived from the average of 2 concurring 200-cell differential counts and concordant RBC and PLT morphology results..."

    4. Adjudication Method for the Test Set

    • Clinical Sensitivity/Specificity Study: The ground truth was based on "the average of 2 concurring 200-cell differential counts and concordant RBC and PLT morphology results." This indicates an agreement-based adjudication method, effectively a 2-of-2 consensus. If the two initial reviews did not concur, a third review (e.g., a 2+1 scheme) may have been used, but this is not explicitly stated. The phrasing "concurring 200-cell differential counts" strongly suggests initial agreement was required; a hypothetical sketch of such a concurrence rule follows.
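
    To make the concurrence rule concrete, here is a hypothetical sketch of a 2-of-2 agreement check followed by averaging into a 400-cell differential. The tolerance value, the cell-class abbreviations, and the handling of non-concurring reads are assumptions; the summary does not specify them.

```python
# Hypothetical sketch of the 2-of-2 concurrence rule: two independent 200-cell
# manual differentials are accepted only if they agree within a tolerance, and
# the accepted pair is averaged into a 400-cell differential. The tolerance and
# any third-read tie-breaking are assumptions; the summary does not state them.
CONCORDANCE_TOLERANCE_PCT = 10.0  # assumed allowable difference per cell class


def concur(diff_a, diff_b, tol=CONCORDANCE_TOLERANCE_PCT):
    """Return True if two 200-cell differentials (in %) agree within tol."""
    return all(abs(diff_a[cls] - diff_b[cls]) <= tol for cls in diff_a)


def combined_400_cell(diff_a, diff_b):
    """Average two concurring 200-cell differentials into a 400-cell result."""
    if not concur(diff_a, diff_b):
        return None  # in practice, would go to a re-read or a third reviewer
    return {cls: (diff_a[cls] + diff_b[cls]) / 2 for cls in diff_a}


reader_1 = {"NEU": 61.5, "LYM": 27.0, "MON": 7.5, "EOS": 2.5, "BASO": 0.5, "IG": 1.0}
reader_2 = {"NEU": 59.0, "LYM": 29.5, "MON": 7.0, "EOS": 2.5, "BASO": 0.5, "IG": 1.5}
print(combined_400_cell(reader_1, reader_2))
```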

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    • No, an MRMC comparative effectiveness study was not done in the context of human readers improving with AI assistance.
    • This submission describes an automated differential cell counter (Alinity h-series System) which is a standalone device for providing results without human assistance in the interpretation of the primary measurements (though human review of flags/smears may occur downstream).
    • The study primarily focuses on comparing the output of the device with its new algorithm (SW 5.8) to its previous version (SW 5.0) and to a predicate device (Sysmex XN-10). The clinical sensitivity/specificity study compares the device's algorithmic flags/differentials to microscopic analysis (human experts), but this is a standalone performance evaluation against a gold standard rather than an assistance study.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Yes, a standalone performance evaluation was primarily done. The core of this submission is about a software algorithm modification within an automated analyzer.
    • All analytical performance studies (precision, linearity, detection limits, stability) and method comparison studies (against the predicate and Sysmex XN-10) evaluate the Alinity hq (with the new algorithm) as a standalone instrument.
    • The clinical sensitivity and specificity study also evaluates the Alinity hq's ability to identify abnormalities and morphological flags independently, comparing its output directly to expert microscopic review (ground truth). There's no mention of a human-in-the-loop scenario where humans are presented with AI results to improve their performance.

    7. The Type of Ground Truth Used

    • Expert Consensus (Microscopy): For the clinical sensitivity/specificity study, the ground truth for WBC differentials and morphological flags was established by manual microscopic review (400-cell differential) by two independent experts, with results based on their concurrence. This is a form of expert consensus.
    • Reference Methods/Device: For analytical performance and method comparison studies, the ground truth was established by:
      • The Alinity hq with its previous software version (K220031) (for direct comparison to the subject device's new software).
      • Another legally marketed predicate device (Sysmex XN-Series (XN-10) Automated Hematology Analyzer K112605).
      • Known concentrations/values in control materials or linearity kits.
      • Clinically accepted laboratory practices and norms (e.g., for precision, stability).

    8. The Sample Size for the Training Set

    • The document does not provide information on the sample size used for the training set for the algorithm modification. Since this submission describes a modification to an existing algorithm ("modified logic for counting basophils"), it's possible the training was done prior to the original K220031 submission, or that new data was used for a specific refinement without explicit detail in this summary. The focus of this 510(k) is the evaluation of the modified algorithm on existing data and its performance compared to predicates, not the development process of the algorithm itself.

    9. How the Ground Truth for the Training Set Was Established

    • As the training set details are not provided, the method for establishing its ground truth is also not elaborated upon in this 510(k) summary. Given the nature of a hematology analyzer, it would typically involve meticulously characterized blood samples, often with manual microscopic differentials, flow cytometry reference methods, or other gold standards, aligned with clinical laboratory guidelines.

    K Number: K220031
    Date Cleared: 2023-08-04 (576 days)

    Regulation Number: 864.5220
    Device Name: Alinity h-series System

    Intended Use

    The Alinity h-series System is an integrated hematology analyzer (Alinity hq) and slide maker stainer (Alinity hs) intended for screening patient populations found in clinical laboratories by qualified health care professionals. The Alinity h-series System can be configured as:

    · One standalone automated hematology analyzer system.

    · A multimodule system that includes at least one Alinity hq analyzer module and may include one Alinity hs slide maker stainer module.

    The Alinity hq analyzer module provides complete blood count and a 6-part white blood cell differential for normal and abnormal cells in capillary and venous whole blood collected in K2EDTA. The Alinity hq analyzer provides quantitative results for the following measurands: WBC, NEU, %N, LYM, %L, MON, %M, EOS, %E, BASO, %B, IG, %IG, RBC, HCT, HGB, MCV, MCH, MCHC, MCHr, RDW, NRBC, NR/W, RETIC, %R, IRF, PLT, MPV, %rP. The Alinity hq analyzer module is indicated to identify patients with hematologic parameters within and outside of established reference ranges. The Alinity hs slide maker stainer module automates whole blood film preparation and staining and stains externally prepared whole blood smears.

    For in vitro diagnostic use.

    Device Description

    The Alinity h-series System is a multimodule system that consists of different combinations of one or more of the following modules: a quantitative multi-parameter automated hematology analyzer (Alinity hq) and an automated slide maker stainer (Alinity hs).
    The modules are designed to fit together. Each module has an internal conveyor that enables racks of specimen tubes to be transported between modules. The system can move racks between modules to perform different tests on a given specimen (e.g., make slide smears on the Alinity hs).

    AI/ML Overview

    Here's an analysis of the acceptance criteria and supporting studies for the Abbott Laboratories Alinity h-series System, based on the provided FDA 510(k) summary:

    Key Takeaways from the Document:

    • This 510(k) pertains to the Abbott Alinity h-series System, an integrated hematology analyzer (Alinity hq) and slide maker stainer (Alinity hs).
    • The device provides complete blood count and a 6-part white blood cell differential.
    • The comparison is primarily against a predicate device: Sysmex® XN-Series (XN-10, XN-20) Automated Hematology Analyzers (K112605).
    • The studies presented focus on analytical performance demonstrating substantial equivalence, rather than a comparative effectiveness study with human readers (MRMC).

    1. Table of Acceptance Criteria and Reported Device Performance

    The document details performance against predefined acceptance criteria, which are largely implied through the presentation of results being "within the predefined acceptance criteria and found to be acceptable." Specific quantitative acceptance criteria are not explicitly stated in a single table but are indicated to have been met for each study type.

    Here’s a consolidated table, inferring the acceptance goals from the reported measurements and statistical analyses (e.g., correlation coefficients, bias estimates, precision, sensitivity, specificity).

    | Performance Metric Category | Specific Metric / Test | Acceptance Criteria (implicit from "All results met...") | Reported Device Performance and Results (Alinity h-series vs. Sysmex® XN-Series where applicable) |
    |---|---|---|---|
    | Method Comparison | Passing-Bablok/Deming regression analysis (r, slope, intercept); bias at critical points | Acceptable correlation (high r), slope near 1, intercept near 0; bias within predefined limits | Correlation coefficient (r): mostly 0.95 - 1.00 (with some exceptions such as %B, %IG, BASO, MCHC, %rP); Slope: mostly ~1.00; Intercept: mostly ~0.00; Bias: reported for various measurands at critical points and stated to be "within the predefined acceptance criteria and found to be acceptable." Sample sizes range from 1545 to 2006 for different measurands. |
    | Sensitivity & Specificity (Morphological/Distributional Abnormalities) | Sensitivity (for flags) | Within predefined acceptance criteria | Sensitivity: 80.19% (95% CI: 75.38%, 84.43%) |
    | | Specificity (for flags) | Within predefined acceptance criteria | Specificity: 76.40% (95% CI: 71.64%, 80.72%) |
    | | Efficiency (for flags) | Within predefined acceptance criteria | Efficiency: 78.19% (95% CI: 74.88%, 81.25%) |
    | Precision (Repeatability) | Mean, SD, CV, 95% CI for various measurands | CVs and SDs within predefined acceptance criteria | Presented for 32 unique donors (16 normal, 16 pathological), stating "All results met the predefined acceptance criteria." |
    | System Reproducibility | Between-run, between-day, within-laboratory, and between-device variance components | CVs and SDs within predefined acceptance criteria | Presented for control levels (low, normal, high), stating "All results met the predefined acceptance criteria." |
    | Linearity | Linear range for key measurands (RBC, HGB, NRBC, WBC, PLT, RETIC) | Demonstrated linearity across specified ranges | RBC: 0.00 – 8.08 x 10^6/μL; HGB: 0.04 – 24.14 g/dL; NRBC: 0.00 – 26.10 x 10^3/μL; WBC: 0.00 – 449 x 10^3/μL; PLT: 0.06 – 5325 x 10^3/μL; RETIC: 0.05 – 644 x 10^3/μL. Stated "All results met the predefined acceptance criteria." |
    | Carryover | Absence of significant carryover | Within predefined acceptance criteria | Stated "All results met the predefined acceptance criteria." |
    | Interfering Substances | Impact of hemoglobin, triglycerides, bilirubin, cholesterol, elevated WBCs/RBCs/PLTs, microcytic RBCs | Impacted measurands should remain within acceptable limits or trigger flags appropriately | Identified specific measurands impacted by certain interferents. Stated "All results met the predefined acceptance criteria." |
    | Limits of Blank, Detection, Quantitation (LoB, LoD, LoQ) | Values for WBC, RBC, HGB, PLT | Within predefined acceptance criteria | WBC: LoB 0.01, LoD 0.02, LoQ 0.03; RBC: LoB 0.00, LoD 0.01, LoQ 0.01; HGB: LoB 0.08, LoD 0.11, LoQ 0.05; PLT: LoB 0.15, LoD 0.38, LoQ 0.29. Stated "All results met the predefined acceptance criteria." |
    | Specimen Stability | Stability of venous and capillary samples over time and temperature | Maintain stability for specified periods (e.g., up to 24-48 hours at specific temperatures) | Supports information provided in system labeling for venous and capillary specimen stability. |
    | Anticoagulant Comparability (K3EDTA vs. K2EDTA) | Mean difference or % difference; regression analysis | Bias within predefined acceptance criteria | Stated "All reportable parameters that were evaluated met their predefined bias acceptance criteria." |
    | Microtainer Capillary vs. Microtube for Automated Process (MAP) Comparability | Mean difference or % difference; regression analysis | Bias within predefined acceptance criteria | Stated "All reportable parameters that were evaluated met their predefined bias acceptance criteria." |
    | Matrix Comparability (Capillary vs. Venous) | Mean difference or % difference; regression analysis | Bias within predefined acceptance criteria | Stated "All reportable parameters that were evaluated met their predefined bias acceptance criteria." |
    | Sample/Tube Processing Mode Comparability (Open vs. Closed) | Mean difference or % difference; regression analysis | Bias within predefined acceptance criteria | Stated "All reportable parameters that were evaluated met their predefined bias acceptance criteria." |
    | Reference Intervals | Establishment of adult & pediatric reference intervals | Statistically derived and verified reference intervals | Established for adult (> 21 yrs) and pediatric (≤ 21 yrs) populations, including subgroups for neonates, infants, children, and adolescents. |
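
    The Passing-Bablok/Deming regression cited in the method-comparison row above is a robust fit of candidate results against comparative-method results, where a slope near 1 and an intercept near 0 indicate the absence of proportional and constant bias. Below is a minimal Passing-Bablok sketch using invented paired data; it is not the submission's software and the numbers are illustrative only.

```python
# Minimal Passing-Bablok sketch (illustrative, not the submission's software):
# slope near 1 and intercept near 0 indicate agreement between the candidate
# analyzer and the comparative method.
import numpy as np


def passing_bablok(x, y):
    """Return (slope, intercept) of a Passing-Bablok fit of y on x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = []
    n = len(x)
    for i in range(n - 1):
        for j in range(i + 1, n):
            dx, dy = x[j] - x[i], y[j] - y[i]
            if dx == 0 and dy == 0:
                continue  # identical points carry no slope information
            s = np.sign(dy) * np.inf if dx == 0 else dy / dx
            if s != -1:  # slopes of exactly -1 are discarded by convention
                slopes.append(s)
    slopes = np.sort(np.asarray(slopes))
    m = len(slopes)
    k = int(np.sum(slopes < -1))  # shift makes the estimator symmetric in x and y
    if m % 2:
        slope = slopes[(m + 1) // 2 + k - 1]
    else:
        slope = np.mean(slopes[[m // 2 + k - 1, m // 2 + k]])
    intercept = np.median(y - slope * x)
    return float(slope), float(intercept)


# Hypothetical paired WBC results (comparative method vs. candidate), not real data.
comparator = [3.9, 5.2, 6.8, 7.4, 9.1, 11.0, 14.5, 21.0]
candidate = [4.0, 5.1, 6.9, 7.5, 9.3, 10.8, 14.7, 20.8]
print(passing_bablok(comparator, candidate))
```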

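    The LoB/LoD/LoQ row above follows the usual detection-capability hierarchy. As a point of reference, here is a minimal sketch of the classical parametric LoB/LoD formulas (CLSI EP17-style); the replicate values are invented and the submission may have used a different estimation approach.

```python
# Minimal sketch of the classical parametric LoB/LoD estimates (CLSI EP17-style).
# The replicate values are invented, and the submission may have used a
# different (e.g., nonparametric) estimation approach.
import numpy as np


def limit_of_blank(blank_results, z=1.645):
    """LoB = mean of blank measurements + 1.645 * SD of blank measurements."""
    b = np.asarray(blank_results, float)
    return b.mean() + z * b.std(ddof=1)


def limit_of_detection(blank_results, low_sample_results, z=1.645):
    """LoD = LoB + 1.645 * SD of replicates of a low-concentration sample."""
    low = np.asarray(low_sample_results, float)
    return limit_of_blank(blank_results, z) + z * low.std(ddof=1)


# Invented WBC replicates (x10^3/uL) near zero.
blanks = [0.00, 0.01, 0.00, 0.01, 0.00, 0.01, 0.00, 0.00]
low_sample = [0.03, 0.02, 0.04, 0.03, 0.02, 0.03, 0.04, 0.03]
print(limit_of_blank(blanks), limit_of_detection(blanks, low_sample))
```
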
    2. Sample Size Used for the Test Set and Data Provenance

    • Method Comparison:

      • Sample Size: 2,194 unique venous and/or capillary specimens.
      • Data Provenance: Not explicitly stated by country, but mentions "7 clinical sites" which implies a multi-center study. The specimens were from pediatric (≤ 21 years) and adult (> 21 years) subjects, including a wide variety of disease states (clinical conditions). This indicates a prospective collection for the purpose of the study.
    • Sensitivity and Specificity:

      • Sample Size (N for analysis): 674 samples for the "Any Morphological Flags and/or Distributional Abnormalities" category. (Individual Ns for TP, FP, FN, TN are provided in the table).
      • Data Provenance: Not explicitly stated by country, but implies multiple sites as it refers to "All Sites Combined – Sensitivity and Specificity." Specimens were venous and capillary. No explicit mention of retrospective/prospective, but implies collection for the study.
    • Precision (Repeatability):

      • Sample Size: 32 unique donors (16 normal, 16 pathological).
      • Data Provenance: Not explicitly stated, implied prospective collection for the study.
    • System Reproducibility:

      • Sample Size: A single lot of Alinity h-series 29P Control (low, normal, high) tested across three clinical sites for 5 days, with 3 runs/day and a minimum of 2 replicates/run. The N for calculations (e.g., WBC, Low) is 84. (A simplified variance-component sketch follows this list.)
      • Data Provenance: Not explicitly stated for country, but multi-site. Implied prospective testing.
    • Linearity:

      • Sample Size: 9 levels, 4 replicates each, for RBC, HGB, NRBC, WBC, PLT, RETIC.
      • Data Provenance: Not explicitly stated, implied prospective testing.
    • Carryover:

      • Sample Size: Minimum of 4 carryover runs at each of 4 testing sites (minimum 16 total runs per measurand) using high and low target specimens.
      • Data Provenance: Not explicitly stated for country, multi-site. Implied prospective testing.
    • Potentially Interfering Substances:

      • Sample Size: Samples tested for the presence of various interferents. No specific N provided.
      • Data Provenance: Not explicitly stated.
    • Limits of Blank, Detection, and Quantitation (LoB, LoD, LoQ):

      • Sample Size: Minimum of 3 days, 2 unique samples/day, 2 test selections, 5 replicates, 2 reagent lots. No specific overall N provided.
      • Data Provenance: Not explicitly stated.
    • Specimen Stability:

      • Sample Size: Minimum of 10 abnormal and 10 normal venous specimens (K2EDTA/K3EDTA); minimum of 20 normal capillary specimens (K2EDTA/K3EDTA).
      • Data Provenance: Not explicitly stated.
    • Anticoagulant Comparability (K3EDTA vs. K2EDTA):

      • Sample Size: 199 unique adult and pediatric donor sets.
      • Data Provenance: Not explicitly stated.
    • Microtainer Capillary vs. Microtube for Automated Process (MAP) Comparability:

      • Sample Size: 44 unique donor sets (normal whole blood specimens).
      • Data Provenance: Not explicitly stated.
    • Matrix Comparability (Capillary vs. Venous):

      • Sample Size: 76 unique venous and capillary donor sets (normal and abnormal).
      • Data Provenance: Not explicitly stated.
    • Sample/Tube Processing Mode Comparability (Open vs. Closed Mode):

      • Sample Size: 226 unique venous specimens.
      • Data Provenance: Not explicitly stated.
    • Reference Intervals:

      • Adults: 261 unique venous and 1 capillary whole blood specimens from 126 male and 136 female adult subjects.
      • Pediatric: 360 venous or capillary specimens (61 neonates, 68 infants, 109 children, 122 adolescents).
      • Data Provenance: Not explicitly stated for country. "Apparently healthy subjects."
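
    As flagged in the System Reproducibility item above, reproducibility studies of this kind decompose total imprecision into variance components. The sketch below is a simplified, single-factor (by-day) version of the CLSI EP05-style calculation; the real study uses a nested site/day/run design, and the control values shown are invented.

```python
# Simplified sketch of a CLSI EP05-style precision decomposition using a one-way
# ANOVA by day. The actual study uses a nested site/day/run design, and the
# control values below are invented, not the submission's data.
import numpy as np


def day_variance_components(runs_by_day):
    """Split total imprecision into within-day and between-day components.

    runs_by_day: list of equal-length lists of replicate results, one list per day.
    Returns (within_day_sd, between_day_sd, total_sd, total_cv_pct).
    """
    data = np.asarray(runs_by_day, float)            # shape: (days, replicates)
    _, n_rep = data.shape
    grand_mean = data.mean()
    ms_within = data.var(axis=1, ddof=1).mean()          # pooled within-day mean square
    ms_between = n_rep * data.mean(axis=1).var(ddof=1)   # between-day mean square
    var_within = ms_within
    var_between = max((ms_between - ms_within) / n_rep, 0.0)
    total_sd = np.sqrt(var_within + var_between)
    return np.sqrt(var_within), np.sqrt(var_between), total_sd, 100 * total_sd / grand_mean


# Invented control results: 5 days x 6 replicates of a "normal"-level WBC control.
control = [
    [7.1, 7.0, 7.2, 7.1, 7.0, 7.1],
    [7.2, 7.3, 7.1, 7.2, 7.2, 7.3],
    [6.9, 7.0, 7.0, 6.9, 7.1, 7.0],
    [7.1, 7.1, 7.2, 7.0, 7.1, 7.2],
    [7.0, 7.1, 7.0, 7.1, 7.0, 7.1],
]
print(day_variance_components(control))
```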

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    The document does not mention the use of experts to establish ground truth for the test set in the context of diagnostic interpretation (e.g., classifying morphological flags from blood films). The "ground truth" for the analytical performance studies (e.g., method comparison, precision) appears to be derived from the predicate device (Sysmex® XN-Series) or established analytical methods.

    For the Sensitivity & Specificity study related to identifying distributional abnormalities and morphological flags using blood films, it states the study was "assessed by identifying distributional abnormalities and morphological flags using blood films." This implies a reference method of manual microscopy for validation, which typically involves expert review. However, the number and qualification of experts used to establish this blood film ground truth are not specified in this summary.


    4. Adjudication Method for the Test Set

    The document does not explicitly describe an adjudication method (like 2+1, 3+1, or none) for establishing ground truth for any of the test sets mentioned (e.g., for morphological flags or disease classification). For analytical studies, the comparative method (predicate device) serves as the reference, not an adjudicated panel.


    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not conducted or reported in this 510(k) summary. This document focuses on the analytical performance of the Alinity h-series System itself (an automated analyzer), demonstrating its substantial equivalence to a predicate automated analyzer. It is not an AI-assisted diagnostic device where human reader improvement would be a relevant metric.


    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, the studies presented primarily represent the standalone performance of the Alinity h-series System. The "system" (Alinity hq analyzer module) performs quantitative measurements and differentiates blood cells without human intervention in the analytical process itself. The data reported (correlation, bias, precision, sensitivity/specificity of flags, linearity, etc.) reflect the device's performance as a standalone automated analyzer. The "sensitivity and specificity" study, while relying indirectly on blood film assessment (a human process) for its ground truth, measures the device's ability to trigger flags, which is a standalone function.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The type of ground truth varies by study:

    • Method Comparison: The predicate device (Sysmex® XN-Series) served as the reference or "ground truth" for comparison purposes.
    • Sensitivity & Specificity: This likely used manual microscopy via blood film review as the reference standard for identifying morphological and distributional abnormalities. The summary does not specify if this involved expert consensus or a single expert's reading.
    • Precision, Reproducibility, Linearity, Carryover, LoB/LoD/LoQ determinations: These technical performance characteristics typically use controlled samples or reference materials with known values, or statistical calculations comparing device measurements to themselves.
    • Interfering Substances, Specimen Stability, Anticoagulant/Matrix/Mode Comparability: These studies assess the device's performance under various conditions against its own baseline or expected performance, or against a comparative sample.
    • Reference Intervals: Established by testing apparently healthy subjects and using statistical methods to define a range of typical values (a conventional nonparametric approach is sketched below).
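
    Reference intervals for apparently healthy subjects are conventionally taken as the central 95% of results (2.5th to 97.5th percentiles, per CLSI EP28). A minimal nonparametric sketch with invented data follows; the summary does not state which statistical method was actually used.

```python
# Minimal sketch of the conventional nonparametric reference interval (central
# 95%, i.e., 2.5th-97.5th percentiles, per CLSI EP28). The values are invented;
# the summary does not state which statistical method was actually used.
import numpy as np


def nonparametric_reference_interval(values, central_fraction=0.95):
    """Return (lower, upper) bounds covering the central fraction of healthy results."""
    values = np.asarray(values, float)
    tail = (1 - central_fraction) / 2
    return float(np.quantile(values, tail)), float(np.quantile(values, 1 - tail))


# Invented platelet counts (x10^3/uL) from "apparently healthy" adults.
rng = np.random.default_rng(0)
plt_counts = rng.normal(loc=250, scale=50, size=240)
print(nonparametric_reference_interval(plt_counts))
```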

    8. The sample size for the training set

    The document does not explicitly mention a "training set" in the context of an algorithm or AI model development. Since the Alinity h-series System is described as using "flow cytometry and absorption spectrophotometry technologies," it's likely a rule-based or conventional instrument, not primarily an AI/ML-driven device requiring a distinct "training set" in the modern sense. The "rules based rerun / reflex" mentioned under software/hardware (page 8) refers to automated decision logic, not necessarily a machine learning model that undergoes training on a large dataset. Therefore, the concept of a "training set" as it relates to AI/ML is not directly applicable or discussed here.


    9. How the ground truth for the training set was established

    As there is no explicit mention of a "training set" for an AI/ML algorithm in this 510(k) summary, how its ground truth was established is not detailed. The device primarily relies on established analytical principles of flow cytometry and spectrophotometry.

