The Alinity h-series System is an integrated hematology analyzer (Alinity hq) and slide maker stainer (Alinity hs) intended for screening patient populations found in clinical laboratories by qualified health care professionals. The Alinity h-series can be configured as:
· One stand-alone automated hematology analyzer system.
· A multimodule system that includes at least one Alinity hq analyzer module and may include one Alinity hs slide maker stainer module.
The Alinity hq analyzer module provides a complete blood count and a 6-part white blood cell differential for normal and abnormal cells in capillary and venous whole blood collected in K2EDTA. The Alinity hq analyzer provides quantitative results for the following measurands: WBC, NEU, %N, LYM, %L, MONO, %M, EOS, %E, BASO, %B, IG, %IG, RBC, HCT, HGB, MCV, MCH, MCHC, MCHr, RDW, NRBC, NR/W, RETIC, %R, IRF, PLT, MPV, %rP. The Alinity hq analyzer module is indicated to identify patients with hematologic parameters within and outside of established reference ranges. The Alinity hs slide maker stainer module automates whole blood film preparation and staining and stains externally prepared whole blood smears.
For in vitro diagnostic use.
The Alinity h-series System is a multimodule system that consists of different combinations of one or more of the following modules: a quantitative multi-parameter automated hematology analyzer (Alinity hq) and an automated slide maker stainer (Alinity hs).
The Alinity hq is a quantitative, multi-parameter, automated hematology analyzer designed for in vitro diagnostic use in counting and characterizing blood cells using a multi-angle polarized scattered separation (MAPSS) method to detect and count red blood cells (RBC), nucleated red blood cells (NRBC), platelets (PLT), and white blood cells (WBC), and to perform WBC differentials (DIFF) in whole blood.
There is also an option to measure reticulocytes (RETIC) at the same time. The available test selections are:
- CBC+DIFF: Complete blood count with differential
- CBC+DIFF+RETIC: Complete blood count with differential and reticulocytes
The Alinity h-series of instruments has a scalable design to provide full integration of multiple automated hematology analyzers that can include the integration of an automated blood film preparation and staining module, all of which are controlled by one user interface. The modules are designed to fit together. Each module has an internal conveyor that enables racks of specimen tubes to be transported between modules. The system can move racks between modules to perform different tests on a given specimen (e.g., make slide smears on the Alinity hs).
An Alinity h-series system can be configured as follows:
- Configuration 1: 1 (Alinity hq) + 0 (Alinity hs) = 1+0
- Configuration 2: 1 (Alinity hq) + 1 (Alinity hs) = 1+1
- Configuration 3: 2 (Alinity hq) + 0 (Alinity hs) = 2+0
- Configuration 4: 2 (Alinity hq) + 1 (Alinity hs) = 2+1
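For readers who work with configuration data programmatically, the module combinations above can be captured in a few lines. The following is a minimal, hypothetical sketch; the class name and validation rule are illustrative and are not part of the 510(k) text:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class HSeriesConfiguration:
    """Hypothetical model of the documented Alinity h-series module combinations."""

    hq_analyzers: int      # number of Alinity hq analyzer modules
    hs_slide_makers: int   # number of Alinity hs slide maker stainer modules

    def __post_init__(self) -> None:
        # The summary lists 1+0, 1+1, 2+0, and 2+1 as the supported layouts.
        if self.hq_analyzers not in (1, 2) or self.hs_slide_makers not in (0, 1):
            raise ValueError(f"Unsupported configuration: {self.hq_analyzers}+{self.hs_slide_makers}")

    def label(self) -> str:
        return f"{self.hq_analyzers}+{self.hs_slide_makers}"


if __name__ == "__main__":
    for hq, hs in [(1, 0), (1, 1), (2, 0), (2, 1)]:
        print(HSeriesConfiguration(hq, hs).label())  # 1+0, 1+1, 2+0, 2+1
```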
Here's a breakdown of the acceptance criteria and study information for the Alinity h-series System, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state all acceptance criteria in a dedicated table with separate "criteria" and "performance" columns for every test. However, it repeatedly states that results "met the predefined acceptance criteria" for the various studies; the tables provided show the device's performance, and the accompanying text confirms that the criteria were met.
Here is a consolidated view of the performance data presented. The implicit acceptance criteria are that results fall within predefined limits appropriate for clinical diagnostic instruments, or that they demonstrate the improvement intended by the software modification.
| Test Category | Measurand | Reported Device Performance (Subject Device SW 5.8) | Acceptance Criteria (implicit, based on "met predefined acceptance criteria" statements) |
|---|---|---|---|
| Precision (Normal Samples) | BASO ($\times 10^3/\mu L$) | CBC+Diff: 0.021 SD (Range 0.01 to 0.12); CBC+Diff+Retic: 0.025 SD (Range 0.01 to 0.11) | SD/%CV point estimates to be within predefined limits. (Explicitly stated: "All samples were evaluated against all applicable acceptance criteria and met all acceptance criteria.") |
| | %BASO (%) | CBC+Diff: 0.352 SD, 41.04 %CV (Range 0.13 to 2.20); CBC+Diff+Retic: 0.455 SD, 41.08 %CV (Range 0.13 to 2.00) | |
| | LYM ($\times 10^3/\mu L$) | CBC+Diff: 0.068 SD (Range 1.10 to 2.01), 3.09 %CV (Range 1.94 to 3.05); CBC+Diff+Retic: 0.063 SD (Range 1.10 to 2.01), 3.17 %CV (Range 1.91 to 3.07) | |
| | %LYM (%) | CBC+Diff: 1.239 SD, 3.34 %CV (Range 13.80 to 57.80); CBC+Diff+Retic: 1.193 SD, 3.63 %CV (Range 13.40 to 58.10) | |
| | WBC ($\times 10^3/\mu L$) | CBC+Diff: 0.068 SD (Range 3.72 to 4.06), 2.71 %CV (Range 3.92 to 10.60); CBC+Diff+Retic: 0.085 SD (Range 3.72 to 4.04), 2.22 %CV (Range 3.93 to 10.40) | |
| Precision (Pathological Samples) | WBC ($\times 10^3/\mu L$) | Low: 0.083 SD (Range 0.06 to 2.01); High: 1.88 %CV (Range 41.40 to 209.00) | SD or %CV point estimates to be within predefined limits. (Explicitly stated: "All results met the predefined acceptance criteria, demonstrating acceptable short-term precision...") |
| | BASO ($\times 10^3/\mu L$) | Low WBC Related: 0.010 SD (Range 0.00 to 0.04) | |
| | LYM ($\times 10^3/\mu L$) | Low WBC Related: 0.040 SD (Range 0.12 to 0.74) | |
| Linearity | WBC | Overall Linearity Range: (0.00 to 448.58) $\times 10^3/\mu L$ | All results met the predefined acceptance criteria and were determined to be acceptable. |
| Method Comparison (vs. Sysmex XN-10) | BASO ($\times 10^3/\mu L$) | r: 0.26 (0.22, 0.30); Slope: 1.25 (1.20, 1.30); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.00 - 2.41, N=1812) | Bias at medical decision points evaluated and within predefined acceptance criteria. "All results were within the predefined acceptance criteria and found to be acceptable..." |
| | %BASO (%) | r: 0.44 (0.40, 0.48); Slope: 1.44 (1.39, 1.50); Intercept: -0.12 (-0.14, -0.09). (Sample Range 0.00 - 8.37, N=1812) | |
| | LYM ($\times 10^3/\mu L$) | r: 0.99 (0.99, 0.99); Slope: 0.99 (0.99, 1.00); Intercept: 0.02 (0.01, 0.02). (Sample Range 0.05 - 8.34, N=1598) | |
| | %LYM (%) | r: 1.00 (1.00, 1.00); Slope: 1.00 (0.99, 1.00); Intercept: 0.04 (0.04, 0.15). (Sample Range 0.34 - 84.60, N=1598) | |
| | WBC ($\times 10^3/\mu L$) | r: 1.00 (1.00, 1.00); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.07 - 436.00, N=1958) | |
| Method Comparison (vs. Predicate Device SW 5.0) | BASO ($\times 10^3/\mu L$) | r: 0.75 (0.73, 0.77); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.00 - 2.41, N=1801) | Bias at medical decision points evaluated and within predefined acceptance criteria. "All results were within the predefined acceptance criteria and found to be acceptable..." |
| | %BASO (%) | r: 0.92 (0.91, 0.92); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.00 - 8.37, N=1801) | |
| | LYM ($\times 10^3/\mu L$) | r: 1.00 (1.00, 1.00); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.05 - 8.34, N=1589) | |
| | %LYM (%) | r: 1.00 (1.00, 1.00); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.34 - 84.60, N=1589) | |
| | WBC ($\times 10^3/\mu L$) | r: 1.00 (1.00, 1.00); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.07 - 436.00, N=1948) | |
| Clinical Sensitivity/Specificity | Any Morphological Flags | Sensitivity: 67.57% (58.03%, 76.15%); Specificity: 77.55% (73.79%, 81.01%); Efficiency: 75.85% (72.37%, 79.09%). (N=650) | Met predefined "acceptance criteria" (numerical targets not explicitly given, but stated as met). |
| | Any Distributional Abnormalities | Sensitivity: 83.02% (77.95%, 87.34%); Specificity: 80.59% (76.20%, 84.49%); Efficiency: 81.60% (78.37%, 84.54%). (N=636) | |
| | Any Morphological and/or Distributional Abnormalities | Sensitivity: 80.98% (76.12%, 85.23%); Specificity: 76.09% (71.22%, 80.51%); Efficiency: 78.40% (75.02%, 81.51%). (N=648) | |
| Reference Range Verification | All measurands | Upper bound of 95% CI for percentage of replayed results within predicate reference ranges was $\ge$ 95%. | Upper bound of the two-sided 95% CI for the percentage of replayed results that were within the reference ranges of the predicate device was $\ge$ 95%. (Explicitly stated as met.) |
| Specific Improvement for Affected BASO Samples | BASO ($\times 10^3/\mu L$) | Subject Device: r: 0.84 (0.75, 0.90); Slope: 1.17 (1.00, 1.32); Intercept: 0.00 (-0.01, 0.01). (Range 0.00 - 1.69, N=67). Predicate Device: r: 0.93 (0.90, 0.96); Slope: 2.22 (1.64, 2.80); Intercept: -0.01 (-0.05, 0.02). (Range 0.03 - 8.11, N=67). Demonstrates reduction of falsely increased basophils. | Results for potentially impacted measurands (BASO and %BASO) must demonstrate reduction of falsely increased basophil measurements. "Additionally, the results demonstrate a reduction in the number of false positive sample %BASO classifications..." |
| | %BASO (%) | Subject Device: r: 0.61 (0.44, 0.75); Slope: 1.22 (0.98, 1.52); Intercept: -0.08 (-0.39, 0.19). (Range 0.00 - 4.33, N=67). Predicate Device: r: 0.33 (0.10, 0.53); Slope: 0.54 (0.31, 0.83); Intercept: 1.83 (1.45, 2.05). (Range 2.00 - 4.49, N=67). Demonstrates reduction of falsely increased basophils. | |
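To make the method-comparison rows above more concrete, the sketch below shows how slope, intercept, and correlation coefficient (r) could be derived from paired results. The 510(k) summary does not state which regression model was used (Passing-Bablok, Deming, and weighted least squares are all common in CLSI EP09-style comparisons), so ordinary least squares is used here purely as an illustration, and the paired values are invented:

```python
# Minimal sketch: regression statistics for a method comparison, using
# ordinary least squares as a stand-in (the actual regression model used in
# the submission is not stated). The paired values below are invented.
import numpy as np
from scipy import stats

comparator = np.array([4.0, 6.9, 9.4, 12.5, 3.2, 7.8])  # e.g., WBC by the comparative method (x10^3/uL)
candidate = np.array([4.1, 6.8, 9.5, 12.2, 3.3, 7.7])   # e.g., WBC by the subject device (x10^3/uL)

fit = stats.linregress(comparator, candidate)
print(f"slope={fit.slope:.3f}  intercept={fit.intercept:.3f}  r={fit.rvalue:.3f}")

# Bias at a medical decision point can then be checked against predefined limits.
decision_point = 4.0
bias = (fit.slope * decision_point + fit.intercept) - decision_point
print(f"estimated bias at {decision_point}: {bias:+.3f}")
```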
2. Sample Size Used for the Test Set and Data Provenance
The "test set" for this submission largely refers to re-analyzing raw data from the predicate device's (K220031) submission using the new algorithm.
- For Precision Studies (Normal Samples):
- Sample Size: 20 unique healthy donors for CBC+Diff, 19 for CBC+Diff+Retic.
- Data Provenance: Retrospective (raw data files from K220031 submission were replayed). The origin of the donors (country) is not specified, but they are described as "healthy."
- For Precision Studies (Pathological Samples and Medical Decision Levels):
- Sample Size: Minimum of 16 donors per measurand and range, with a minimum of 4 repeatability samples per measurand and range (2 for CBC+Diff, 2 for CBC+Diff+Retic).
- Data Provenance: Retrospective (raw data files from K220031 submission were replayed). The origin of the donors (country) is not specified, but they include "abnormal whole blood samples."
- For Linearity:
- Sample Size: RBC, HGB, NRBC used whole blood samples; WBC, PLT, RETIC used commercially available linearity kits. A minimum of 9 levels were prepared for each measurand.
- Data Provenance: Retrospective (raw data files from K220031 submission were replayed). Whole blood samples and commercial kits.
- For Method Comparison Study (Subject Device vs. Predicate Device K220031 and Sysmex XN-10):
- Sample Size: 2,194 unique venous and/or capillary specimens. 1,528 specimens from subjects with medical conditions, 244 without.
- Data Provenance: Retrospective (raw data files from K220031 submission were replayed). The origin of the donors (country) is not specified, but collected across 7 clinical sites, representing a "wide variety of disease states (clinical conditions)" and "wide range of demographics (age and sex)."
- Specific "affected samples" for basophil analysis: 67 samples.
- For Specimen Stability Studies:
- K2EDTA Venous & Capillary Whole Blood: 14 unique native venous, 30 unique native capillary.
- Controlled Room Temp K2EDTA Venous & Capillary Whole Blood: 10 K2EDTA venous from healthy donors, 10 abnormal de-identified leftover K2EDTA venous, 20 normal K2EDTA capillary.
- K3EDTA Venous Whole Blood: 14 unique native venous.
- K3EDTA Capillary Whole Blood: 94 unique native capillary.
- Data Provenance: Retrospective (raw data files from K220031 submission were replayed). Samples from "apparently healthy donors" and "abnormal de-identified leftover" samples.
- For Detection Limit:
- Sample Size: 2 unique samples per day over a minimum of 3 days.
- Data Provenance: Retrospective (raw data files from K220031 submission were replayed).
- For Clinical Sensitivity/Specificity:
- Sample Size: A subset of 674 venous and capillary whole blood specimens from the method comparison study.
- Data Provenance: Retrospective (raw data files from K220031 submission were replayed). Collected from 6 clinical sites.
- For Reference Range Verification:
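The reference range verification criterion quoted in item 1 (the upper bound of the two-sided 95% CI for the percentage of replayed results within the predicate's reference ranges must be ≥ 95%) can be illustrated with a short proportion-interval check. The summary does not state which interval method was used; an exact (Clopper-Pearson) interval is assumed below, and the counts are invented:

```python
# Minimal sketch of the reference range verification check, assuming an exact
# (Clopper-Pearson) two-sided 95% interval; the interval method and counts are
# assumptions for illustration only.
from scipy.stats import beta


def clopper_pearson(successes: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Exact two-sided (1 - alpha) confidence interval for a binomial proportion."""
    lower = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, n - successes + 1)
    upper = 1.0 if successes == n else beta.ppf(1 - alpha / 2, successes + 1, n - successes)
    return lower, upper


within_range, total = 118, 120  # hypothetical replayed results falling within the predicate's ranges
low, high = clopper_pearson(within_range, total)
print(f"95% CI: ({low:.3f}, {high:.3f}); upper bound >= 0.95: {high >= 0.95}")
```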
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Clinical Sensitivity/Specificity Study:
- Number of Experts: Two independent 200-cell microscopic reviews were performed per sample, i.e., at least two experts per sample.
- Qualifications: Not explicitly stated beyond "microscopic reviews of a blood smear (reference method)." It can be inferred these would be qualified laboratory professionals (e.g., medical technologists, clinical laboratory scientists) with expertise in manual differential counting, but specific years of experience or board certifications are not provided.
- Ground Truth: The "final WBC differential and WBC, RBC, and PLT morphology results were based on the 400-cell WBC differential counts derived from the average of 2 concurring 200-cell differential counts and concordant RBC and PLT morphology results..."
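Given this microscopic reference standard, the sensitivity, specificity, and efficiency figures reported in item 1 reduce to simple 2x2 calculations against the smear results. A minimal sketch with invented counts follows; "efficiency" is taken here to mean overall agreement, consistent with its usual use in hematology flagging studies:

```python
# Minimal sketch: flagging performance against the microscopic reference
# method. The 2x2 counts below are invented for illustration.
def flagging_performance(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),    # device flagged among truly abnormal smears
        "specificity": tn / (tn + fp),    # device did not flag among truly normal smears
        "efficiency": (tp + tn) / total,  # overall agreement with microscopy
    }


print(flagging_performance(tp=75, fp=90, fn=36, tn=449))
```

The confidence intervals reported alongside these percentages in item 1 would typically be binomial intervals of the kind sketched above for the reference range verification.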
4. Adjudication Method for the Test Set
- Clinical Sensitivity/Specificity Study: The ground truth was based on "the average of 2 concurring 200-cell differential counts and concordant RBC and PLT morphology results." This indicates an agreement-based adjudication method, likely a 2-of-2 consensus. If the two initial reviews did not concur, a third review/adjudication might have been employed (e.g., 2+1), but this is not explicitly stated. The phrasing "concurring 200-cell differential counts" strongly suggests initial agreement was required.
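As a concrete reading of the consensus step described above, the sketch below combines two independent 200-cell differentials into a single 400-cell differential when they concur. The summary does not define the concurrence rule, so the per-cell-type tolerance used here is an assumption for illustration only:

```python
# Minimal sketch of a 2-of-2 consensus: combine two 200-cell differentials
# into a 400-cell differential if they concur. The tolerance is an assumption.
CELL_TYPES = ("NEU", "LYM", "MONO", "EOS", "BASO", "IG")


def combine_differentials(count_a: dict, count_b: dict, tolerance_pct: float = 5.0):
    """Return the 400-cell differential (%) if the two 200-cell reads concur, else None."""
    for cell in CELL_TYPES:
        pct_a = count_a[cell] / 200 * 100
        pct_b = count_b[cell] / 200 * 100
        if abs(pct_a - pct_b) > tolerance_pct:
            return None  # no concurrence; a further review/adjudication would be needed
    return {cell: (count_a[cell] + count_b[cell]) / 400 * 100 for cell in CELL_TYPES}


reader_1 = {"NEU": 120, "LYM": 56, "MONO": 14, "EOS": 6, "BASO": 2, "IG": 2}
reader_2 = {"NEU": 116, "LYM": 60, "MONO": 13, "EOS": 7, "BASO": 2, "IG": 2}
print(combine_differentials(reader_1, reader_2))
```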
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- No, an MRMC comparative effectiveness study was not done in the context of human readers improving with AI assistance.
- This submission describes an automated differential cell counter (Alinity h-series System), a standalone device that produces results without human assistance in the interpretation of the primary measurements (though human review of flags/smears may occur downstream).
- The study primarily compares the output of the device running the new algorithm (SW 5.8) to its previous version (SW 5.0) and to a predicate device (Sysmex XN-10). The clinical sensitivity/specificity study compares the device's algorithmic flags/differentials to microscopic analysis (human experts), but this is a standalone performance evaluation against a gold standard, not an assistance study.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, a standalone performance evaluation was primarily done. The core of this submission is about a software algorithm modification within an automated analyzer.
- All analytical performance studies (precision, linearity, detection limits, stability) and method comparison studies (against the predicate and Sysmex XN-10) evaluate the Alinity hq (with the new algorithm) as a standalone instrument.
- The clinical sensitivity and specificity study also evaluates the Alinity hq's ability to identify abnormalities and morphological flags independently, comparing its output directly to expert microscopic review (ground truth). There's no mention of a human-in-the-loop scenario where humans are presented with AI results to improve their performance.
7. The Type of Ground Truth Used
- Expert Consensus (Microscopy): For the clinical sensitivity/specificity study, the ground truth for WBC differentials and morphological flags was established by manual microscopic review (400-cell differential) by two independent experts, with results based on their concurrence. This is a form of expert consensus.
- Reference Methods/Device: For analytical performance and method comparison studies, the ground truth was established by:
- The Alinity hq with its previous software version (K220031) (for direct comparison to the subject device's new software).
- Another legally marketed predicate device (Sysmex XN-Series (XN-10) Automated Hematology Analyzer K112605).
- Known concentrations/values in control materials or linearity kits.
- Clinically accepted laboratory practices and norms (e.g., for precision, stability).
8. The Sample Size for the Training Set
- The document does not provide information on the sample size used for the training set for the algorithm modification. Since this submission describes a modification to an existing algorithm ("modified logic for counting basophils"), it's possible the training was done prior to the original K220031 submission, or that new data was used for a specific refinement without explicit detail in this summary. The focus of this 510(k) is the evaluation of the modified algorithm on existing data and its performance compared to predicates, not the development process of the algorithm itself.
9. How the Ground Truth for the Training Set Was Established
- As the training set details are not provided, the method for establishing its ground truth is also not elaborated upon in this 510(k) summary. Given the nature of a hematology analyzer, it would typically involve meticulously characterized blood samples, often with manual microscopic differentials, flow cytometry reference methods, or other gold standards, aligned with clinical laboratory guidelines.
§ 864.5220 Automated differential cell counter.
(a) Identification. An automated differential cell counter is a device used to identify one or more of the formed elements of the blood. The device may also have the capability to flag, count, or classify immature or abnormal hematopoietic cells of the blood, bone marrow, or other body fluids. These devices may combine an electronic particle counting method, optical method, or a flow cytometric method utilizing monoclonal CD (cluster designation) markers. The device includes accessory CD markers.

(b) Classification. Class II (special controls). The special control for this device is the FDA document entitled “Class II Special Controls Guidance Document: Premarket Notifications for Automated Differential Cell Counters for Immature or Abnormal Blood Cells; Final Guidance for Industry and FDA.”