K Number: K171655
Date Cleared: 2018-03-02 (270 days)
Product Code:
Regulation Number: 864.5220
Panel: HE
Intended Use

The cobas m 511 integrated hematology analyzer is a quantitative, automated analyzer with cell locating capability. It is intended for in vitro diagnostic use by a skilled operator in the clinical laboratory. The system prepares a stained microscope slide from EDTA-anticoagulated whole blood. It utilizes computer imaging to count the formed elements of blood and provide an image-based assessment of cell morphology, which may be reviewed by the operator, and also allows for manual classification of unclassified cells. The instrument reports the following parameters: RBC, HGB, HCT, MCV, MCH, MCHC, RDW, RDW-SD, %NRBC, #NRBC, WBC, %NEUT, #NEUT, %LYMPH, #LYMPH, %MONO, #MONO, %EO, #EO, %BASO, #BASO, PLT, MPV, %RET, #RET, HGB-RET.

Device Description

The cobas m 511 system is a fully automated stand-alone hematology analyzer with integrated slide making capability and digital cell imaging. It provides a complete blood count, 5-part differential, and reticulocyte enumeration of whole blood samples collected in K2 or K3 EDTA. It is designed for high throughput in the clinical laboratory environment.

AI/ML Overview

Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text.

Device: cobas m 511 integrated hematology analyzer
Predicate Devices: Sysmex® XN-Series (XN-10, XN-20) Automated Hematology Analyzer and CellaVision® DM1200 Automated Hematology Analyzer

It's important to note that the provided text is a 510(k) summary, which focuses on demonstrating substantial equivalence to predicate devices rather than directly outlining the acceptance criteria in a traditional sense (e.g., "sensitivity must be > X%, specificity > Y%"). Instead, the document describes performance studies and states that results "met acceptance criteria" or "were found to be acceptable," implying that specific internal acceptance criteria were used for each test. I will extract the performance results where available, and indicate where quantitative acceptance criteria are not explicitly stated but implied to have been met.


1. Table of Acceptance Criteria and Reported Device Performance

Note: The document does not explicitly present a table of acceptance criteria. Instead, it presents performance data and states that "acceptance criteria were met." Where quantitative values are provided, they represent the reported device performance which implicitly met the underlying, but unstated, acceptance criteria for FDA clearance.

| Performance Characteristic | Acceptance Criteria (Implied / Stated) | Reported Device Performance |
|---|---|---|
| Method Comparison (vs. predicate device) | Correlation and bias "found to be acceptable for all reportable parameters" based on CLSI EP09-A3 guidelines. | See Table 3 in the original document for full details; examples for key parameters below. |
| WBC [10³/µL] | Implied to be acceptable. | Pearson's r = 0.999; intercept = 0.02; slope = 1.012; bias at low/high limits: 0.03 to 0.07 [10³/µL], 1.26 to 1.7 [%]. |
| RBC [10⁶/µL] | Implied to be acceptable. | Pearson's r = 0.974; intercept = 0.02; slope = 0.991; bias at low/high limits: −0.41 to −0.56. |
| HGB [g/dL] | Implied to be acceptable. | Pearson's r = 0.970; intercept = −0.33; slope = 1.046; bias at low/high limits: −0.12 to 0.14 [g/dL], 1.35 to 3.08 [%]. |
| PLT [10³/µL] | Implied to be acceptable. | Pearson's r = 0.973; intercept = −11.03; slope = 1.020; bias at low/high limits: −10.83 to 0.88. |
| %NEUT [%] | Implied to be acceptable. | Pearson's r = 0.989; intercept = 1.62; slope = 1.012; bias at low/high limits: 3.06 to 5.21. |
| %LYMPH [%] | Implied to be acceptable. | Pearson's r = 0.989; intercept = −0.23; slope = 0.977; bias at low/high limits: −2.66 to −0.81. |
| Flagging Capabilities (Clinical Sensitivity & Specificity) | For WBC messages (flags) vs. the 400-cell reference method: met acceptance criteria for sensitivity and specificity. | Sensitivity = 92.9% (118/(118+9)); Specificity = 96.8% (302/(302+10)). |
| Precision (Repeatability, within-run) | "Repeatability results met their pre-defined acceptance criteria" (based on CLSI EP05-A3 and H26-A2 standards). | See Table 5 in the original document for full details; examples for WBC and PLT below. |
| WBC [10³/µL] (all samples) | Implied to be acceptable. | Mean of sample means: 12.06; SD: 0.233; %CV: 1.93. |
| PLT [10³/µL] (all samples) | Implied to be acceptable. | Mean of sample means: 246.77; SD: 6.749; %CV: 2.73. |
| Reproducibility (Total Precision) | "Reproducibility for the three (3) levels of DigiMAC3 controls was calculated and found to be acceptable for all sites combined for all reportable parameters" (consistent with CLSI EP05-A3). | See Table 6 in the original document for full details; examples for WBC and PLT below. |
| WBC [10³/µL] (L1 control) | Implied to be acceptable. | Mean: 16.93; total (reproducibility) SD: 0.404; %CV: 2.39. |
| PLT [10³/µL] (L1 control) | Implied to be acceptable. | Mean: 470.53; total (reproducibility) SD: 8.090; %CV: 1.72. |
| Linearity | Demonstrated to be linear from lower to upper limit; "all results met acceptance criteria" (based on CLSI EP06-A). | See Table 7 in the original document for examples showing the maximum absolute (relative) deviation met the allowed deviation, e.g., WBC System 1: 0% dev (8%) vs. 0.5% allowed (15%). |
| Carryover | "Carryover results for the cobas m 511 system met acceptance criteria" (based on ICSH guidelines and CLSI H26-A2). | White blood cells: 0.000%; red blood cells: 0.000%; platelets: 0.001%; blasts: 0.000%. |
| Interfering Substances | No significant interference effects up to the specified concentrations, except at the noted thresholds (HGB/HCT with hemolysis; WBC/#LYMPH with lipemia). | Unconjugated/conjugated bilirubin: no significant effects up to 40 mg/dL. Hemolysis: no significant effects up to 1000 mg/dL, except HGB ≥ 672 mg/dL and HCT ≥ 792 mg/dL. Lipemia: no significant effects up to 3000 mg/dL, except WBC ≥ 1646 mg/dL and #LYMPH ≥ 2459 mg/dL. High WBC/PLT concentration: no significant effects up to 100.2 × 10⁹/µL (WBC) and 1166 × 10³/µL (PLT). |
| Specimen Stability | "The combined results demonstrated stability for normal and abnormal samples up to or beyond twenty-four (24) hours." | Samples stable up to and beyond 24 hours at ambient (15°C–25°C) and refrigerated (2°C–8°C) storage. |
| Anticoagulant Comparison | "All acceptance criteria were met, demonstrating equivalency of results obtained from samples collected into K2 EDTA and K3 EDTA." | Equivalence established. |
| Venous and Capillary Blood Method Comparisons | "Overall, the data demonstrate comparable results between venous and capillary blood processed on the cobas m 511 system." | Comparability established. |
| Mode-to-Mode Analysis | "The results were found to be acceptable in that all twenty-six (26) reportable parameters that were evaluated met acceptance criteria." | Acceptance criteria met. |
| Limit of Blank (LoB), Limit of Detection (LoD), Limit of Quantitation (LoQ) | Based on CLSI EP17-A2 guidelines. | WBC: LoB = 0.05, LoD = 0.08, LoQ = 0.24 [10³/µL]. PLT: LoB = 1, LoD = 3, LoQ = 6 [10³/µL]. |
| Reference Intervals | "Normal reference ranges for adult and pediatric cohorts are consistent with those in the published literature." | Established for adult males, adult females, and six pediatric subgroups. |
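As a cross-check, the flagging sensitivity/specificity and the precision %CV figures reported above can be reproduced directly from the counts and summary statistics in the performance data. A minimal sketch; the function and variable names are illustrative, not from the submission:

```python
# Recompute summary metrics from the 510(k) performance data.
# Counts and statistics are taken from the performance table; names are illustrative.

def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: flagged samples confirmed by the 400-cell reference."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: unflagged samples confirmed negative by the reference."""
    return tn / (tn + fp)

def percent_cv(sd: float, mean: float) -> float:
    """Coefficient of variation as a percentage (SD relative to the mean)."""
    return 100.0 * sd / mean

# WBC flagging vs. the 400-cell reference method
sens = sensitivity(tp=118, fn=9)     # 118/(118+9)
spec = specificity(tn=302, fp=10)    # 302/(302+10)

# Within-run repeatability, WBC and PLT (all samples)
cv_wbc = percent_cv(sd=0.233, mean=12.06)
cv_plt = percent_cv(sd=6.749, mean=246.77)

print(f"Sensitivity: {sens:.1%}, Specificity: {spec:.1%}")
print(f"WBC %CV: {cv_wbc:.2f}, PLT %CV: {cv_plt:.2f}")
```

Running this reproduces the reported figures (92.9%, 96.8%, 1.93, 2.73), confirming the summary statistics are internally consistent.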

2. Sample Sizes and Data Provenance

  • Test Set Sample Size:
    • Method Comparison: 1859-1864 samples (exact number varies slightly by parameter, presumably due to valid data points for each). Sample collection lasted "a minimum of two (2) weeks at each of four (4) clinical sites."
    • Flagging Capabilities: 439 samples (for WBC flags).
    • Precision (Repeatability): 144 samples (for primary parameters), with 31-143 individual samples processed for different parameter groups. Total observations were 4436 (for WBC, RBC, HGB, etc.) and 4405 (for PLT, MPV, %NRBC, etc.) derived from 31 consecutive runs.
    • Reproducibility: 120 observations per control level (for 3 control levels per parameter).
    • Linearity: Not explicitly stated as a single number but involved serial dilutions run for 6 replicates (5 for reticulocytes) on multiple systems.
    • Carryover: 12 independent carryover experiments.
    • Interfering Substances: Dose-response experiments using 6 incremental concentration samples for each substance.
    • Specimen Stability: 31 normal samples, 14 abnormal samples.
    • Anticoagulant Comparison: 44 healthy donor samples, 40 residual abnormal samples.
    • Venous and Capillary Blood Method Comparisons: 40 healthy donor samples, 40 residual abnormal capillary samples.
    • Mode to Mode Analysis: Not a specified sample size number, but compared results from closed-tube vs. open-tube modes.
    • LoB, LoD, LoQ: "three (3) individual test days" for each.
  • Data Provenance: The studies were conducted at four (4) clinical sites. The document does not specify the country of origin of these sites but implies they are clinical laboratories. The studies are prospective in nature, as they involve testing samples on the newly developed cobas m 511 system and comparison to predicate devices/reference methods. The samples were "residual whole blood samples" (for some studies) or collected specifically for the studies ("from healthy volunteer donors," "apheresis samples").
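For context on how the near-zero carryover percentages in the summary are typically derived: ICSH/CLSI H26-style protocols run a high-concentration sample several times followed by a low sample several times, and compute carryover from the first and third low-sample readings. The summary does not give the study's exact calculation, so the following is a sketch under that assumption, with hypothetical readings:

```python
def percent_carryover(low1: float, low3: float, high3: float) -> float:
    """ICSH-style carryover estimate: residual signal in the first low-sample
    run after the high runs, relative to the high-to-low span.
    low1:  first low-sample result immediately after the high runs
    low3:  third (settled) low-sample result
    high3: third high-sample result
    """
    return 100.0 * (low1 - low3) / (high3 - low3)

# Hypothetical PLT readings (10^3/µL) consistent with ~0.001% carryover
co = percent_carryover(low1=10.0049, low3=10.0, high3=500.0)
print(f"Carryover: {co:.3f}%")
```

With clean washout, low1 equals low3 and the estimate is 0.000%, matching the WBC, RBC, and blast results reported above.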

3. Number of Experts and Qualifications for Ground Truth

  • Expert Usage: For Flagging Capabilities, the "400-cell reference method" refers to the combined results from two (2) 200-cell WBC differentials performed by individuals on two (2) separate blood smears. For Carryover, "slides from the LTV serum samples were reviewed by an external hematopathologist to determine cell carryover."
  • Number of Experts: At least two individuals performed the 200-cell WBC differentials for the flagging study. At least one external hematopathologist was used for the carryover study.
  • Qualifications: "Skilled operator in the clinical laboratory" is mentioned as the intended user. For the flagging study, "individuals" are implied to be laboratory professionals trained in WBC differentials. The carryover study explicitly mentions an "external hematopathologist," implying a medical doctor specializing in laboratory hematology, which suggests a high level of expertise.

4. Adjudication Method for the Test Set

  • Flagging Capabilities: The "400-cell reference method" involved two separate 200-cell differentials. The combination of these two suggests a form of consensus or combined result, but specific adjudication rules (e.g., if discrepancies, a third reader decides) are not detailed. It's presented as a direct summation or combination of two expert reads.
  • Other Studies: For quantitative parameter comparisons, the ground truth for the "reference method" (often the predicate device or a standardized laboratory method) is assumed to be the established truth, and formal adjudication among multiple readers is not explicitly mentioned as being part of the process for the device's numerical outputs.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

  • No Multi-Reader Multi-Case (MRMC) comparative effectiveness study was explicitly described in the provided text for comparing human reader performance with and without AI assistance. The device is an automated analyzer, and its primary comparison is to another automated analyzer (Sysmex XN-Series) and an automated cell locating device that assists a human operator (CellaVision DM1200).
  • The document implies that the cobas m 511 system provides images that may be reviewed by the operator and allows for manual classification of unclassified cells (%NEUT, %LYMPH, etc. are determined by the instrument). The comparison for "Flagging Capabilities" involves the device's flags against a human expert reference method, highlighting the device's ability to trigger review. However, this is not a study measuring the improvement of human readers with AI assistance.
  • Therefore, an effect size of how much human readers improve with AI vs. without AI assistance is not provided as such a study was not the focus of this 510(k) submission.

6. Standalone Performance Study

  • Yes, standalone performance (algorithm-only, without human-in-the-loop) was extensively evaluated. The "Analytical Performance" section (5.1) details multiple studies of the device's performance in measuring various blood parameters independently.
    • Method Comparison: Compares the device's readings against a predicate device.
    • Precision (Repeatability and Reproducibility): Measures the device's consistency.
    • Linearity: Assesses the device's accuracy across its measuring range.
    • Carryover, Interfering Substances, Specimen Stability, LoB/LoD/LoQ: All measure the inherent performance characteristics of the automated analyzer itself.
  • The device "utilizes computer imaging to count the formed elements of blood and provide an image-based assessment of cell morphology," and "reports the following parameters: RBC, HGB, HCT, MCV, MCH, MCHC, RDW, RDW-SD, %NRBC, #NRBC, WBC, %NEUT, #NEUT, %LYMPH, #LYMPH, %MONO, #MONO, %EO, #EO, %BASO, #BASO, PLT, MPV, %RET, #RET, HGB-RET." These are all automated measurements.

7. Type of Ground Truth Used

The type of ground truth used varies by study:

  • Method Comparison: The predicate device's measurements (Sysmex XN-Series) served as the comparator/ground truth for quantitative parameters.
  • Flagging Capabilities: A 400-cell reference method which involved expert consensus (two individuals performing 200-cell WBC differentials on separate smears) was used as ground truth for WBC flagging. This represents a type of expert consensus based on microscopy.
  • Precision and Reproducibility: Standardized quality control materials (DigiMAC3 controls) with known values, and repeated measurements of patient samples across ranges.
  • Carryover: Expert review by an external hematopathologist of slides to confirm cell carryover.
  • Linearity, Interfering Substances, Stability, LoB/LoD/LoQ: These studies establish the device's intrinsic performance characteristics, often relative to expected values or reference methods for their respective tests, rather than a single 'ground truth' in the diagnostic sense. For linearity, prepared samples with known (or expected) concentrations are typically used.
  • Reference Intervals: Based on statistical analysis of samples from normal healthy donors and comparison to published literature.

8. Sample Size for the Training Set

  • The document does not report the sample size for the training set for the AI/computer imaging components. This 510(k) summary focuses on the validation of the final device rather than its development. Details about the training data used for the "proprietary imaging algorithms" are typically considered proprietary and not required in a public 510(k) summary.

9. How the Ground Truth for the Training Set Was Established

  • Since the training set size and details are not provided, the method for establishing ground truth for the training set is also not discussed in this document. It is highly probable, given the nature of the device, that ground truth for training would involve extensive manual expert review and classification of blood cell images by trained morphologists or hematopathologists.

§ 864.5220 Automated differential cell counter.

(a) Identification. An automated differential cell counter is a device used to identify one or more of the formed elements of the blood. The device may also have the capability to flag, count, or classify immature or abnormal hematopoietic cells of the blood, bone marrow, or other body fluids. These devices may combine an electronic particle counting method, optical method, or a flow cytometric method utilizing monoclonal CD (cluster designation) markers. The device includes accessory CD markers.

(b) Classification. Class II (special controls). The special control for this device is the FDA document entitled “Class II Special Controls Guidance Document: Premarket Notifications for Automated Differential Cell Counters for Immature or Abnormal Blood Cells; Final Guidance for Industry and FDA.”