510(k) Data Aggregation
Trade/Device Name: Sysmex XR-Series (XR-10) Automated Hematology Analyzer
Regulation Number: 21 CFR 864.5220
Regulation Description: Automated Differential Cell Counter
The XR-Series module (XR-10) is a quantitative multi-parameter automated hematology analyzer intended for in vitro diagnostic use in screening patient populations found in clinical laboratories.
The XR-Series module classifies and enumerates the following parameters in whole blood: WBC, RBC, HGB, HCT, MCV, MCH, MCHC, PLT (PLT-I, PLT-F), NEUT%/#, LYMPH%/#, MONO%/#, EO%/#, BASO%/#, IG%/#, RDW-CV, RDW-SD, MPV, NRBC%/#, RET%/#, IPF, IPF#, IRF, RET-He and has a Body Fluid mode for body fluids. The Body Fluid mode enumerates the WBC-BF, RBC-BF, MN%/#, PMN%/#, and TC-BF# parameters in cerebrospinal fluid (CSF), serous fluids (peritoneal, pleural) and synovial fluids. Whole blood should be collected in K2EDTA or K3EDTA anticoagulant, and serous and synovial fluids in K2EDTA anticoagulant to prevent clotting of fluid. The use of anticoagulants with CSF specimens is neither required nor recommended.
The Sysmex XR-Series module (XR-10) is a quantitative multi-parameter hematology analyzer intended to perform tests on whole blood samples collected in K2 or K3EDTA and body fluids (pleural, peritoneal and synovial) collected in K2EDTA anticoagulant. The analyzers can also perform tests on CSF, which should not be collected in any anticoagulant. The XR-Series analyzer consists of four principal units: (1) one Main Unit (XR-10), which aspirates, dilutes, mixes, and analyzes blood and body fluid samples; (2) two Auto Sampler Units (SA-10, SA-01), which supply samples to the Main Unit automatically; (3) the IPU (Information Processing Unit), which processes data from the Main Unit and provides the operator interface with the system; and (4) the Pneumatic Unit, which supplies pressure and vacuum to the Main Unit.
This document describes the acceptance criteria and the studies conducted to prove that the Sysmex XR-Series (XR-10) Automated Hematology Analyzer meets these criteria, demonstrating substantial equivalence to its predicate device, the Sysmex XN-20.
1. Table of Acceptance Criteria and Reported Device Performance
The FDA 510(k) clearance letter does not explicitly present a neatly formatted table of acceptance criteria alongside the reported performance for all parameters. Instead, it describes various performance studies (Precision, Linearity, Analytical Specificity/Interferences, Sample Stability, Detection Limit, Carry-Over, Comparison Studies, Matrix Studies, Bridging Studies, Clinical Studies, and Expected Values/Reference Range) and states that the XR-10 met "manufacturer's specifications or predefined acceptance criteria requirements" for each.
However, based on the provided data, we can infer some general acceptance criteria, particularly from the method comparison study which uses correlation coefficient (r) and percent bias (%Bias) as metrics. The clinical sensitivity and specificity tables also present implied acceptance criteria based on the demonstrated performance.
Inferred Acceptance Criteria & Reported Performance (Selection of Key Metrics)
| Study Type / Parameter Category | Acceptance Criteria (Inferred) | Reported Device Performance (Summary from text) |
|---|---|---|
| Whole Blood Precision (Analyte-specific %CV) | Met manufacturer's specifications or predefined acceptance criteria requirements. | WBC: 0.30% to 2.76% CV (Repeatability); 0.97% to 1.98% (Reproducibility, within run) |
| | | RBC: 0.45% to 0.97% CV (Repeatability); 0.73% to 1.03% (Reproducibility, within run) |
| | | HGB: 0.38% to 0.79% CV (Repeatability); 0.40% to 0.98% (Reproducibility, within run) |
| | | PLT-I: 1.30% to 8.32% CV (Repeatability); 1.59% to 3.70% (Reproducibility, within run) |
| Body Fluid Precision (Analyte-specific %CV) | Met manufacturer's specifications or predefined acceptance criteria requirements. | WBC-BF: 2.01% to 3.91% CV (Repeatability); 2.01% to 3.91% (Reproducibility, within run) |
| | | RBC-BF: 1.87% to 3.49% CV (Repeatability); 1.87% to 3.49% (Reproducibility, within run) |
| Linearity (Whole Blood & Body Fluid) | Linear from lower limit to upper limit and within the maximum allowable deviation from linearity for each interval (all results met predefined acceptance criteria). | WBC (WB): 0.03 – 440.00 x10³/μL |
| | | RBC (WB): 0.01 – 8.60 x10⁶/μL |
| | | HGB (WB): 0.1 – 26.0 g/dL |
| | | PLT (WB): 2 – 5,000 x10³/μL |
| | | WBC-BF: 0.003 – 10.000 x10³/μL |
| Method Comparison (Whole Blood: r-value) | ≥0.95 (explicitly stated for HGB, implied for others) | WBC: 0.9997 |
| | | RBC: 0.9900 |
| | | HGB: 0.9915 |
| | | PLT-I: 0.9991 |
| Method Comparison (Whole Blood: %Bias) | Within predefined bias limits (e.g., ±2% or 0.2 g/dL for HGB) | HGB: -1.41% (Note: one site showed -2.10% for HGB, slightly exceeding ±2%, but this was deemed acceptable due to the high r-value) |
| | | WBC: 0.52% |
| | | RBC: -0.83% |
| Method Comparison (Body Fluid: r-value) | Acceptance criteria not explicitly stated, but high correlation values reported (e.g., >0.99 for WBC-BF, RBC-BF, TC-BF) | CSF WBC-BF: 0.9968 |
| | | Peritoneal WBC-BF: 0.9989 |
| Abnormal Flagging (Sensitivity/Specificity vs. Manual Microscopy) | No explicit numerical acceptance criteria given. | Any distributional abnormalities: Sensitivity 74.37%, Specificity 79.48%, OPA 76.31% |
| | | Any morphological flag: Sensitivity 83.26%, Specificity 65.25%, OPA 70.77% |
| | | Any distributional and/or morphological abnormalities: Sensitivity 82.25%, Specificity 62.64%, OPA 75.38% |
| Abnormal Flagging (PPA/NPA vs. Predicate XN-20) | No explicit numerical acceptance criteria given. | Any distributional abnormalities: PPA 94.74%, NPA 95.88%, OPA 95.20% |
| | | Any morphological flag: PPA 92.29%, NPA 86.01%, OPA 89.10% |
| | | Any distributional and/or morphological abnormalities: PPA 96.37%, NPA 88.01%, OPA 93.73% |
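To make the method-comparison metrics above concrete, the following is a minimal Python sketch of how a correlation coefficient and a mean percent bias can be computed from paired candidate/predicate results. The sample values are invented, and the r ≥ 0.95 check simply mirrors the inferred criterion quoted above; none of this reproduces the sponsor's actual analysis.

```python
# Minimal sketch: method-comparison metrics (Pearson r and %Bias) on made-up
# paired results; illustrative only, not the submission's statistical protocol.
from statistics import mean
import math

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def percent_bias(candidate, comparator):
    # Mean of the paired percent differences, candidate vs. comparator (predicate).
    return mean((c - p) / p * 100 for c, p in zip(candidate, comparator))

# Hypothetical WBC results (x10^3/uL): predicate vs. candidate analyzer.
predicate = [4.5, 6.2, 7.8, 11.3, 2.9, 15.4]
candidate = [4.6, 6.1, 7.9, 11.4, 2.9, 15.6]

r = pearson_r(candidate, predicate)
bias = percent_bias(candidate, predicate)
print(f"r = {r:.4f}, %Bias = {bias:+.2f}%")
print("meets r >= 0.95:", r >= 0.95)
```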
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size & Provenance:
- Precision (Repeatability - Whole Blood): Residual K2EDTA whole blood samples tested in 10 replicates for target values, plus three samples for the other parameters, across three US clinical sites (Sites 01, 05, and 24). A short repeatability %CV sketch follows this list.
- Precision (Reproducibility - Whole Blood): XN CHECK whole blood control material, 90 results per control level (3 levels x 3 replicates x 2 runs x 5 days). Conducted at three US clinical sites.
- Precision (Body Fluid): Residual peritoneal, pleural, and synovial fluid samples (K2EDTA) and CSF (no anticoagulant) for 10 replicates for target values. Conducted at three US clinical sites.
- Linearity (Whole Blood & Body Fluid): Minimum of seven sample dilutions. Performed at one internal site.
- Analytical Specificity/Interferences: Whole blood K2EDTA samples from donors. Number of samples not specified, but collected for this study purpose.
- Sample Stability (Whole Blood): 8 unique leftover samples and 12 prospectively collected K2EDTA venous whole blood samples (20 samples total). Conducted at one internal site.
- Sample Stability (Body Fluid): 12 unique de-identified leftover body fluid samples (3 CSF, 3 peritoneal, 3 pleural, 3 synovial). Conducted at one external site.
- Detection Limit: Four blank samples and four low-concentration samples for each parameter. Conducted across two XR-10 analyzers (site not specified; implied to be internal within the overall study context).
- Carry-Over: High and low target concentration samples (number not specified). Conducted at three US clinical sites.
- Comparison Studies (Whole Blood): 865 unique residual whole blood samples, including pediatric samples.
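The precision entries above report repeatability as a %CV of replicate results. As a rough illustration only (the replicate values are invented, not data from these studies), here is a short Python sketch of the within-run SD and %CV calculation:

```python
# Illustrative only: within-run (repeatability) SD and %CV from replicate
# measurements of a single sample, the metric reported in the precision studies.
from statistics import mean, stdev

def repeatability_cv(replicates):
    m = mean(replicates)
    sd = stdev(replicates)          # sample SD of the replicate results
    return sd, 100.0 * sd / m       # %CV = SD / mean * 100

# Hypothetical 10-replicate WBC run (x10^3/uL) on one residual sample.
wbc_replicates = [6.21, 6.18, 6.25, 6.19, 6.22, 6.20, 6.24, 6.17, 6.23, 6.21]
sd, cv = repeatability_cv(wbc_replicates)
print(f"SD = {sd:.3f}, %CV = {cv:.2f}%")
```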
Trade/Device Name: Sysmex XR-Series (XR-20) Automated Hematology Analyzer
Regulation Number: 21 CFR 864.5220
Regulation Description: Automated Differential Cell Counter
The XR-Series module (XR-20) is a quantitative multi-parameter automated hematology analyzer intended for in vitro diagnostic use in screening patient populations found in clinical laboratories.
The XR-Series module classifies and enumerates the following parameters in whole blood: WBC, RBC, HGB, HCT, MCV, MCH, MCHC, PLT (PLT-I, PLT-F), NEUT%/#, LYMPH%/#, MONO%/#, EO%/#, BASO%/#, IG%/#, RDW-CV, RDW-SD, MPV, NRBC%/#, RET%/#, IPF, IPF#, IRF, RET-He and has a Body Fluid mode for body fluids. The Body Fluid mode enumerates the WBC-BF, RBC-BF, MN%/#, PMN%/#, and TC-BF# parameters in cerebrospinal fluid (CSF), serous fluids (peritoneal, pleural) and synovial fluids. Whole blood should be collected in K2EDTA or K3EDTA anticoagulant, and serous and synovial fluids in K2EDTA anticoagulant to prevent clotting of fluid. The use of anticoagulants with CSF specimens is neither required nor recommended.
The Sysmex XR-Series module (XR-20) is a quantitative multi-parameter hematology analyzer intended to perform tests on whole blood samples collected in K2 or K3EDTA and body fluids (pleural, peritoneal and synovial) collected in K2EDTA anticoagulant. The analyzers can also perform tests on CSF, which should not be collected in any anticoagulant. The XR-Series analyzer consists of four principal units: (1) one Main Unit (XR-20), which aspirates, dilutes, mixes, and analyzes blood and body fluid samples; (2) one Auto Sampler Unit, which supplies samples to the Main Unit automatically; (3) the IPU (Information Processing Unit), which processes data from the Main Unit and provides the operator interface with the system; and (4) the Pneumatic Unit, which supplies pressure and vacuum to the Main Unit.
The XR-20 analyzer has an additional white progenitor cell (WPC) measuring channel and associated WPC reagents. The new WPC channel provides two separate flags for blasts and abnormal lymphocytes.
The provided FDA 510(k) clearance letter details the performance testing conducted for the Sysmex XR-Series (XR-20) Automated Hematology Analyzer to demonstrate its substantial equivalence to the predicate device, the Sysmex XN-20. Because the document is an FDA clearance letter summarizing the performance studies rather than the full study reports, some requested details (e.g., exact sample provenance for all studies beyond "US clinical sites," specific qualifications for all experts, and the comprehensive list of acceptance criteria for every individual parameter in each study) are not exhaustively provided.
However, based on the information available, here's a breakdown of the acceptance criteria and the studies proving the device meets them:
Overall Acceptance Criteria & Study Design Philosophy:
The overarching acceptance criterion for this 510(k) submission is to demonstrate substantial equivalence to the predicate device, Sysmex XN-20 (K112605). This is primarily proven through:
- Analytical Performance Studies: Demonstrating that the XR-20 analyzer's performance (precision, linearity, analytical specificity, stability, limits of detection, carry-over) is "acceptable" or "met manufacturer's specifications/predefined acceptance criteria requirements."
- Method Comparison Studies: Showing a strong correlation and acceptable bias between the XR-20 and the predicate XN-20 for all claimed parameters across various patient populations and challenging samples.
- Clinical Sensitivity and Specificity Studies: For flagging capabilities, demonstrating acceptable agreement (sensitivity/specificity, PPA/NPA/OPA) with a reference method (manual microscopy) and the predicate device.
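For context on the agreement statistics used throughout these flagging studies, the sketch below shows how sensitivity/specificity (equivalently PPA/NPA against a reference) and overall percent agreement follow from 2x2 counts. The counts are hypothetical, not taken from the submission.

```python
# Sketch of the agreement metrics quoted in this summary (sensitivity/specificity
# vs. a reference method, or PPA/NPA/OPA vs. the predicate), from 2x2 counts.
def agreement(tp, fp, fn, tn):
    ppa = tp / (tp + fn) * 100                    # sensitivity / positive percent agreement
    npa = tn / (tn + fp) * 100                    # specificity / negative percent agreement
    opa = (tp + tn) / (tp + fp + fn + tn) * 100   # overall percent agreement
    return ppa, npa, opa

# Hypothetical flagging counts against manual microscopy.
ppa, npa, opa = agreement(tp=180, fp=45, fn=32, tn=143)
print(f"PPA {ppa:.2f}%  NPA {npa:.2f}%  OPA {opa:.2f}%")
```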
1. Table of Acceptance Criteria and Reported Device Performance
The document doesn't provide a single, consolidated table of all acceptance criteria for every parameter across all tests. Instead, it states that results "met manufacturer's specifications or predefined acceptance criteria requirements" for analytical performance tests, and provides specific correlation coefficients, slopes, intercepts, and mean differences/percent differences for method comparison studies.
Here's a partial table based on the quantifiable data presented for Method Comparison Studies (Whole Blood - Combined Sites), which is a key performance indicator for substantial equivalence. The "Acceptance Criteria" are implied by what is generally considered acceptable in hematology analyzer comparisons for FDA submissions (high correlation, small bias), and explicitly stated for certain parameters like HGB.
Implicit Acceptance Criteria (General expectation for Method Comparison based on FDA context):
- Correlation Coefficient (r): Typically > 0.95 (ideally > 0.98 or 0.99 for robust parameters)
- Slope: Close to 1.0 (ideally between 0.95 and 1.05)
- Intercept: Close to 0
- Mean Difference / % Mean Difference / Estimated Bias: Within clinically acceptable limits (often derived from biological variation or regulatory guidelines). The document explicitly mentions a bias limit for HGB: ±2% or 0.2 g/dL.
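A small sketch of how the HGB bias limit quoted above could be applied in practice. The example means are invented, and reading "±2% or 0.2 g/dL" as a pass if either limit is met is an assumption, not a rule stated in the letter.

```python
# Sketch of the dual HGB bias limit quoted above (+/-2% or +/-0.2 g/dL);
# the "either limit" interpretation and the example values are assumptions.
def hgb_bias_acceptable(mean_predicate_gdl, mean_candidate_gdl):
    diff = mean_candidate_gdl - mean_predicate_gdl
    pct = diff / mean_predicate_gdl * 100
    return abs(pct) <= 2.0 or abs(diff) <= 0.2, diff, pct

ok, diff, pct = hgb_bias_acceptable(13.5, 13.3)
print(f"diff {diff:+.1f} g/dL ({pct:+.2f}%) -> {'pass' if ok else 'fail'}")
```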
Table 1: Partial Acceptance Criteria and Reported Device Performance (Method Comparison - Whole Blood)
Measurand | Acceptance Criteria for r (Implied/Explicit) | Reported r | Acceptance Criteria for Slope (Implied) | Reported Slope (95% CI) | Acceptance Criteria for Mean Diff. / % Diff. (Implied/Explicit) | Reported Mean Diff. / % Diff. | Key Conclusion based on Criteria |
---|---|---|---|---|---|---|---|
WBC | > 0.99 (high) | 0.9999 | ~1.0 | 1.003 (0.998, 1.007) | Close to 0 | 0.17 / 0.96% | Met |
RBC | > 0.99 (high) | 0.9944 | ~1.0 | 1.000 (0.993, 1.006) | Close to 0 | -0.01 / -0.34% | Met |
HGB | > 0.99 (high) | 0.9954 | ~1.0 | 0.993 (0.989, 0.998) | ±2% or 0.2 g/dL (explicit) | -0.1 / -0.79% (Note: One site had -2.10% / -0.3 g/dL bias, stated as acceptable due to high r) | Met (with explanation for one site's bias) |
HCT | > 0.99 (high) | 0.9946 | ~1.0 | 0.998 (0.993, 1.003) | Close to 0 | -0.2 / -0.40% | Met |
PLT-I | > 0.99 (high) | 0.9990 | ~1.0 | 1.005 (0.991, 1.020) | Close to 0 | -2 / -0.72% | Met |
PLT-F | > 0.99 (high) | 0.9990 | ~1.0 | 1.034 (1.019, 1.048) | Close to 0 | 6 / 1.83% | Met |
NRBC | > 0.99 (high) | 0.9996 | ~1.0 | 1.006 (0.996, 1.016) | Close to 0 | 0.00 / 0.61% | Met |
RET (%) | > 0.99 (high) | 0.9931 | ~1.0 | 1.033 (1.009, 1.057) | Close to 0 | 0.06 / 2.68% | Met |
IRF (%) | ~ 0.98 (high) | 0.9820 | ~1.0 | 0.998 (0.983, 1.012) | Close to 0 | -0.9 / -5.32% | Met |
IPF (%) | > 0.99 (high) | 0.9902 | ~1.0 | 0.999 (0.976, 1.023) | Close to 0 | -0.0 / -0.94% | Met |
RET-He (pg) | ~ 0.96 (high) | 0.9616 | ~0.93 (lower, but CI is tight) | 0.930 (0.906, 0.954) | Close to 0 | -1.2 / -3.85% | Met |
For other analytical performance studies (Precision, Linearity, Detection Limit, Carry-Over, Specificity, Stability), the document consistently states that the XR-20 "met manufacturer's specifications or predefined acceptance criteria requirements," supporting that specific numerical acceptance criteria were defined and achieved.
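To illustrate what a linearity evaluation of the kind summarized above typically involves, here is a hedged Python sketch that fits a least-squares line to a dilution series and reports each level's deviation from that line. The dilution levels, the measured values, and the use of ordinary least squares are assumptions for illustration, not the protocol or limits used in the submission.

```python
# Sketch of a generic linearity check: fit a straight line to a dilution series
# and report each level's deviation from linearity. Values are invented.
from statistics import mean

def linear_fit(x, y):
    mx, my = mean(x), mean(y)
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

def deviations_from_linearity(expected, measured):
    b, a = linear_fit(expected, measured)
    return [(e, m, m - (a + b * e)) for e, m in zip(expected, measured)]

# Hypothetical 7-level WBC dilution series (x10^3/uL).
expected = [0, 10, 50, 100, 200, 300, 440]
measured = [0.05, 10.2, 49.4, 100.8, 199.1, 301.5, 438.7]
for e, m, d in deviations_from_linearity(expected, measured):
    print(f"level {e:>5}: measured {m:>6}, deviation {d:+.2f}")
```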
2. Sample Size and Data Provenance
- Test Set:
- Whole Blood Method Comparison: A total of 865 unique residual whole blood samples.
- Body Fluid Method Comparison: A total of 397 residual body fluid samples.
- Provenance: All studies were conducted at three US clinical sites (for major studies like method comparison and reproducibility) or one internal site (for some linearity, stability, and matrix studies).
- Retrospective/Prospective: Samples are described as "residual" (implying retrospective, de-identified leftover samples) or "prospectively collected" where specified (e.g., for some stability studies).
- Training Set: The document does not specify a training set for the algorithm, as this is a traditional in-vitro diagnostic (IVD) device (Automated Hematology Analyzer) which likely relies on fixed algorithms and established measurement principles (RF/DC Detection, Sheath Flow DC Detection, Flow Cytometry) rather than a machine learning model that requires explicit training data for its core functionality. The performance characterization is about its analytical capabilities, not about learning from a dataset to perform a task.
3. Number of Experts and Qualifications (for Ground Truth)
- Clinical Sensitivity and Specificity (Flagging Capabilities): The ground truth for flagging capabilities was established by "manual differential counts and peripheral blood smear review by experienced examiners using light microscopy (reference method) at each of the three external clinical sites." The exact number and specific qualifications (e.g., "radiologist with 10 years of experience") are not provided, but the term "experienced examiners" implies qualified personnel (e.g., clinical laboratory scientists, pathologists). Given this is a hematology analyzer, these would typically be clinical laboratory specialists or hematopathologists.
4. Adjudication Method (for the Test Set)
- Clinical Sensitivity and Specificity (Flagging): The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1) for resolving discrepancies between multiple manual reviews. It states "peripheral blood smear review by experienced examiners," primarily using manual microscopy as the reference method. This implies there might be a single expert review or an internal consensus process, but no detail on conflict resolution is provided.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC study was described. This type of study (MRMC) is typically performed for AI-assisted diagnostic tools where human reader performance is a direct outcome of interest and needs to be compared with and without AI assistance. The Sysmex XR-20 is an automated analyzer, a standalone device that performs measurements and classifications. While it outputs flags that may assist human review, its primary function isn't human-in-the-loop assistance in interpretation (like an AI for radiology image reading). Therefore, an MRMC study is not applicable for this device.
6. Standalone (Algorithm Only) Performance
- Yes, the primary performance studies are standalone algorithm/device performance. All analytical performance studies (Precision, Linearity, Detection Limit, Carry-Over, Analytical Specificity, Sample Stability) and the method comparison studies (comparing XR-20 results directly against the predicate XN-20) represent the standalone performance of the XR-20 analyzer. The device functions automatically without human input during the analysis process; therefore, human-in-the-loop performance is not directly evaluated as a primary outcome for its measurement capabilities.
7. Type of Ground Truth Used
- Analytical Ground Truth: For most analytical performance studies (Precision, Linearity, Detection Limits, Carry-Over), the "ground truth" is established by the performance of the predicate device (Sysmex XN-20) or by a well-controlled experimental setup (e.g., known dilutions for linearity, blank samples for LoB).
- Clinical Ground Truth (for flagging): For clinical sensitivity and specificity of flagging capabilities, the ground truth was expert consensus / manual review against peripheral blood smear using light microscopy. The document refers to this as the "reference method."
8. Sample Size for the Training Set
- As mentioned in point 2, no explicit training set for a machine learning algorithm is described. This device's core functionality relies on established physical and chemical principles and traditional signal processing for cell counting and classification, not a learnable AI model from a training data set in the typical sense.
9. How the Ground Truth for the Training Set Was Established
- Since there's no described "training set" for an AI/ML algorithm, this point is not applicable in the context of this traditional IVD device. The "ground truth" for verifying its performance (as detailed above) was established through comparisons to a legally marketed predicate device (XN-20) and a gold standard manual method (microscopy).
Semen Quality Analyzer; LensHooke X3 PRO SE Semen Quality Analyzer
Regulation Number: 21 CFR 864.5220
Product Code: POV (Semen Analysis Device), Class II, 21 CFR 864.5220
For Over-the-Counter Setting:
The LensHooke® X3 PRO SE Semen Quality Analyzer used with LensHooke® Semen Test Cassette is an optical device for human semen analysis which provides direct and calculated quantitative measurements for:
- Sperm concentration (M/mL)
- Total motility (PR+NP, %)
- Progressive motility (%)
- Non-Progressive motility (%)
- Immotility (%)
- Sperm morphology (normal forms, %)
- pH value
The LensHooke® X3 PRO SE Semen Quality Analyzer does not provide a comprehensive evaluation of a male's fertility status. It is a self-testing diagnostic system intended for human semen analysis of individuals at home to evaluate male fertility.
For Point-of-Care Professional Setting:
The LensHooke® X3 PRO Semen Quality Analyzer used with LensHooke® Semen Test Cassette is an optical device for human semen analysis which provides direct and calculated quantitative measurements for:
- Sperm concentration (M/mL)
- Total motility (PR+NP, %)
- Progressive motility (%)
- Non-Progressive motility (%)
- Immotility (%)
- Sperm morphology (normal forms, %)
- pH value
The LensHooke® X3 PRO Semen Quality Analyzer does not provide a comprehensive evaluation of a male's fertility status. It is an in-vitro diagnostic system intended for human semen analysis of individuals in healthcare professional setting to evaluate male fertility.
The Semen Quality Analyzer integrates optical design and image analysis, combined with an artificial-intelligence image processing method, to fully automate analysis of semen quality, including semen pH, sperm concentration, motility, and morphology. Images are captured and recorded by cameras, and image processing methods detect the locations of sperm. Sperm concentration is analyzed from sperm unit density; sperm motility is calculated by tracing sperm trajectories; and sperm morphology is calculated by comparing head and tail percentages. Through the camera, a chromatographic image of the pH indicator is captured, and the pH level is determined from image saturation and brightness analysis.
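The following is a generic, simplified Python sketch of how motility fractions could be derived once per-sperm trajectories are available, included only to make the description above concrete. The tracking step, the velocity thresholds, and the classification by average path velocity are illustrative assumptions; this is not the manufacturer's proprietary algorithm.

```python
# Generic illustration (not the device's proprietary method) of deriving
# motility fractions from per-sperm tracks: classify each track by its
# average path velocity and report percentages. Thresholds are illustrative.
def motility_fractions(tracks, frame_rate_hz, prog_um_s=25.0, motile_um_s=5.0):
    # tracks: list of [(x_um, y_um), ...] positions per sperm.
    counts = {"PR": 0, "NP": 0, "IM": 0}
    for pts in tracks:
        path = sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                   for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
        duration = (len(pts) - 1) / frame_rate_hz
        vap = path / duration if duration > 0 else 0.0   # average path velocity
        if vap >= prog_um_s:
            counts["PR"] += 1        # progressive
        elif vap >= motile_um_s:
            counts["NP"] += 1        # non-progressive
        else:
            counts["IM"] += 1        # immotile
    n = sum(counts.values())
    return {k: 100.0 * v / n for k, v in counts.items()}

# Three toy tracks sampled at 30 fps.
demo = [[(0, 0), (1, 1), (2, 2), (3, 3)],              # fast track
        [(0, 0), (0.3, 0), (0.6, 0.2), (0.9, 0.2)],    # slow track
        [(0, 0), (0, 0), (0, 0), (0, 0)]]              # immotile track
print(motility_fractions(demo, frame_rate_hz=30))
```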
The provided FDA 510(k) clearance letter and summary for the LensHooke X3 PRO Semen Quality Analyzer offer a high-level overview of the device and the studies conducted. However, it does not provide explicit details on the acceptance criteria for each performance metric (e.g., specific thresholds for accuracy, sensitivity, specificity, or precision for motility, concentration, and morphology), nor does it present the reported device performance against these criteria in a quantitative manner. The summary largely discusses the types of studies performed (repeatability, reproducibility, linearity, etc.) rather than the results of those studies.
Based on the provided text, here's an attempt to structure the information, highlighting where details are not explicitly stated in the document:
Acceptance Criteria and Device Performance Study for LensHooke X3 PRO Semen Quality Analyzer
The LensHooke X3 PRO Semen Quality Analyzer was evaluated through non-clinical and user performance studies to demonstrate substantial equivalence to its predicate device, the LensHooke X1 PRO Semen Quality Analyzer. While the document mentions various types of evaluations, explicit quantitative acceptance criteria and detailed performance results are not provided for each parameter.
1. Table of Acceptance Criteria and Reported Device Performance
Note: The provided document states that "Verification and validation of test results were evaluated to establish the performance, functionality and reliability" and lists types of evaluations (repeatability, reproducibility, LoB/LoD/LoQ, linearity, interference, matrix comparison, sample volume, operating conditions, and stability). It also mentions a "User Performance Study." However, specific numerical acceptance criteria (e.g., minimum accuracy percentages, CVs for precision) and the achieved performance metrics are not detailed in this summary. The table below reflects the parameters measured but cannot fill in the acceptance criteria or reported performance based solely on this document.
| Parameter Measured | Acceptance Criteria (Not Explicitly Stated in Document) | Reported Device Performance (Not Explicitly Stated in Document) |
|---|---|---|
| Sperm concentration (M/mL) | e.g., accuracy within X% of reference; CV within a stated limit | Not provided in the summary |
| Total, progressive, non-progressive motility and immotility (%) | Not provided in the summary | Not provided in the summary |
| Sperm morphology (normal forms, %) | Not provided in the summary | Not provided in the summary |
| pH value | Not provided in the summary | Not provided in the summary |
Re: K242388
Trade/Device Name: LensHooke X12 PRO Semen Analysis System
Regulation Number: 21 CFR 864.5220
Product Code: POV (Semen Analysis Device), Class II, 21 CFR 864.5220
The LensHooke X12 PRO Semen Analysis System used with LensHooke Semen Test Slide is an optical device for human semen analysis which provides direct and calculated measurements for:
(1) Sperm concentration (M/mL)
(2) Total motility (PR+NP, %)
• Progressive Motility (PR, %)
(3) Sperm Morphology (normal forms, %)
The LensHooke X12 PRO Semen Analysis System used with LensHooke R10 Plus Sperm DNA Fragmentation Rapid Test Kit (SCD Assay) and LensHooke R11 Plus Sperm Double Strand DNA Fragmentation Rapid Test Kit (SDFR Assay) is an optical device for human semen analysis which provides direct measurement for:
(1) Sperm DNA Fragmentation Index (DFI, %)
The LensHooke X12 PRO Semen Analysis System does not provide a comprehensive evaluation of a male's fertility status. It is an in-vitro diagnostic system intended for human semen analysis of individuals in clinical laboratories to evaluate male fertility.
The system is built around the i.MX 6ULL, a high-performance, feature-rich, low-power processor.
The provided 510(k) summary for the LensHooke X12 PRO Semen Analysis System offers a high-level overview of the device and its claimed substantial equivalence to a predicate device. However, it lacks the detailed information typically required to fully describe the acceptance criteria and the study that proves the device meets those criteria.
Specifically, the document states: "Verification and validation of test results were evaluated to establish the performance, functionality and reliability of LensHooke X12 PRO Semen Analysis System. The evaluation included repeatability, reproducibility, LoB/LoD/LoQ, linearity, interference, sample volume, operating conditions, stability and matrix study." and "System Accuracy Study and User Performance study". These are general categories of tests, but the specific acceptance criteria and the results proving they were met are not explicitly provided.
Therefore, many parts of your request cannot be fully answered with the given text. I will fill in what can be inferred or is explicitly stated, and clearly mark what information is missing.
Device: LensHooke X12 PRO Semen Analysis System
The LensHooke X12 PRO Semen Analysis System is an optical device for human semen analysis that provides direct and calculated measurements for:
- Sperm concentration (M/mL)
- Total motility (PR+NP, %)
- Progressive Motility (PR, %)
- Sperm Morphology (normal forms, %)
- Sperm DNA Fragmentation Index (DFI, %) (when used with specific test kits)
1. Table of Acceptance Criteria and Reported Device Performance
MISSING INFORMATION: The 510(k) summary does not provide a specific table of acceptance criteria or the numerical results for each performance metric (e.g., specific accuracy thresholds, precision ranges). It states that "results of performance evaluation... demonstrate that the subject devices are substantial equivalence to the predicate device," implying that the performance met acceptable levels, but the actual targets and outcomes are not detailed.
To illustrate what would be in such a table if the information were available in the provided text:
| Performance Metric | Acceptance Criteria (Example) | Reported Device Performance (Example based on typical expectations for such devices, not found in text) |
|---|---|---|
| Sperm Concentration | % bias within +/- X% of reference; R² > 0.Y vs. reference | (e.g., bias within the stated limit; agreement > 90%) |
| Motility | Agreement with reference above a stated threshold | (e.g., agreement > 85%) |
| Morphology | Agreement with reference above a stated threshold | (e.g., agreement > 80% for normal forms) |
| DNA Fragmentation Index | % bias within +/- D% of reference | (e.g., bias within the stated limit) |
| Linearity | R² > 0.YY across specified range | (e.g., linear across the claimed range) |
| Interference | No significant interference from common substances | (e.g., bilirubin, triglycerides, hemolysate at specified levels) |
| Stability | Device/reagents stable under specified conditions | (e.g., stable for X months/hours) |
| Usability (User Perf. Study) | User accuracy/ease of use met pre-defined criteria | (e.g., high scores on participant questionnaire) |
Note: The metrics like repeatability, reproducibility, LoB/LoD/LoQ, linearity, interference, sample volume, operating conditions, stability, and matrix study are mentioned as having been evaluated, which would typically have associated acceptance criteria and reported performances, but these are not disclosed in the provided FDA letter.
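For orientation on the LoB/LoD studies mentioned (but not detailed) above, the sketch below shows the classical parametric calculation commonly used in CLSI EP17-style analyses. The blank and low-level results are invented; nothing here reflects values actually obtained for this device.

```python
# Sketch of the classical parametric LoB/LoD calculation (EP17-style reasoning):
# LoB = mean(blanks) + 1.645*SD(blanks); LoD = LoB + 1.645*SD(low-level sample).
from statistics import mean, stdev

def lob(blank_results, k=1.645):
    return mean(blank_results) + k * stdev(blank_results)

def lod(blank_results, low_sample_results, k=1.645):
    return lob(blank_results, k) + k * stdev(low_sample_results)

blanks = [0.0, 0.1, 0.0, 0.2, 0.1, 0.0, 0.1, 0.1]   # M/mL, hypothetical
low    = [1.8, 2.1, 1.9, 2.3, 2.0, 1.7, 2.2, 2.0]   # M/mL, hypothetical
print(f"LoB = {lob(blanks):.2f} M/mL, LoD = {lod(blanks, low):.2f} M/mL")
```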
2. Sample Size Used for the Test Set and Data Provenance
MISSING INFORMATION: The document states that a "System Accuracy Study" and "User Performance study" were performed for the non-clinical tests. However, the specific sample sizes (e.g., number of semen samples, number of users) used in these studies are not provided.
- Data Provenance: The document does not specify the country of origin for the data or whether the studies were retrospective or prospective. It only states the submitter is from Taichung, Taiwan, which might imply the studies were conducted there.
3. Number of Experts Used to Establish the Ground Truth and Qualifications
MISSING INFORMATION: The 510(k) summary does not specify the number of experts used to establish ground truth for the test set. It also does not specify the qualifications of these experts.
For the user performance study, it mentions "professional/English reading users, across educational backgrounds" but this refers to the users of the device being tested, not necessarily the experts establishing ground truth for the "System Accuracy Study."
4. Adjudication Method for the Test Set
MISSING INFORMATION: The document does not describe any adjudication method (e.g., 2+1, 3+1 consensus) used for establishing ground truth for the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
MISSING INFORMATION: The 510(k) summary does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance. This device is described as an "automated" and "optical device for human semen analysis," which implies a primary focus on automated measurement rather than AI assistance for human image interpretation in the same way an AI for radiology might function.
The "User Performance study" was described as demonstrating that "professional/English reading users... can easily understand and follow the labeling/user instructions to obtain accurate results while using Subject Device." This is a usability and user accuracy study, not an MRMC study comparing AI-assisted vs. unassisted human performance.
6. Standalone (Algorithm Only) Performance
Partially Addressed/Inferred: The device is described as an "optical device for human semen analysis which provides direct and calculated measurements" using "image analysis and combined with artificial intelligence image processing method." This implies that the device operates as a standalone algorithm to perform the semen analysis.
The "System Accuracy Study" would have been conducted to evaluate this standalone performance against a ground truth method (the predicate device "X1 RPO performed by lab personnel was used as a reference method" for the user performance study, and presumably for the system accuracy study as well, though it's not explicitly stated for the latter).
7. Type of Ground Truth Used
Inferred/Partially Addressed:
- For the "System Accuracy Study" and "User Performance Study," the predicate device, "X1 RPO performed by lab personnel was used as a reference method." The X1 RPO is also an automated semen analysis system. This means the ground truth was based on the measurements obtained from the predicate device (another automated system), presumably validated for accuracy.
- It is common in such evaluations that the predicate device's results are considered the "reference" or "ground truth" for comparison. However, true "ground truth" for semen analysis often involves manual microscopy by trained and experienced laboratory personnel following World Health Organization (WHO) guidelines, which is considered the gold standard for manual methods. The document does not explicitly state if the predicate device's performance itself was validated against such a manual gold standard, or if "lab personnel" performing X1 RPO analysis implies any manual verification for the study.
8. Sample Size for the Training Set
MISSING INFORMATION: The 510(k) summary does not provide any information about the sample size used for the training set for the AI/image processing algorithms.
9. How the Ground Truth for the Training Set Was Established
MISSING INFORMATION: The 510(k) summary does not provide any information on how the ground truth for the training set was established. Given the device uses "artificial intelligence image processing method," training data with established ground truth would be essential, but details are not disclosed in this summary.
Re: K243114
Trade/Device Name: SQA-iOw Sperm Quality Analyzer
Regulation Number: 21 CFR 864.5220
Common Name: SQA-iOw Sperm Quality Analyzer
Classification: Class II, POV, 21 CFR 864.5220
The SQA-iOw Sperm Quality Analyzer is an automated analyzer intended for in-vitro diagnostic use to determine the following parameters in semen:
Measured parameters:
- Sperm Concentration/ Total Sperm Concentration, millions/mL
- Motile Sperm Concentration (MSC), millions/mL
- Progressively Motile Sperm Concentration (PMSC), millions/mL (combines Rapidly and Slowly Progressive Motile Sperm Concentration, millions/mL)
- Normal Forms (% Normal Morphology), %
Derived parameters:
- Total Motility / Total Motile (PR + NP), %
- Progressive Motility (PR), % (combines Rapidly and Slowly Progressive, %)
- Non-Progressive (NP), %
- Immotile (IM), %
The SQA-iOw is intended for CLIA Waived settings. The SQA-iOw does not provide a comprehensive evaluation of a male's fertility status and is intended for in vitro use only.
The SQA-iOw Sperm Quality Analyzer is a PC-based analytical medical device that tests human semen samples. The device works with a computer application that manages the device, and information related to the patient, the sample, the test results and the facility.
After collection and preparation, 0.6 mL of semen sample is aspirated into a disposable SQA capillary sample delivery system and inserted into the SQA-iOw measurement chamber. The testing process takes approximately 75 seconds. The system performs an automatic self-test and auto-calibration upon start up, and checks device stability before each sample is run.
The SQA-iOw Sperm Quality Analyzer utilizes proprietary software code to both perform analysis of semen parameters and present those results on the user interface. This software is installed on a PC as a cloud-based application ("app") and is designed to perform all functions and features of the SQA-iO device, controlled by the user through a proprietary graphical user interface (GUI).
The SQA-iOw Sperm Quality Analyzer software analyzes semen parameters using signal processing technology. Sample testing is performed by capturing electrical signals as sperm moves through a light source in the SQA-iO optical block. These light disturbances are converted into electrical signals which are then analyzed by the SQA-iOw software. The SQA-iOw software applies proprietary algorithms to interpret and express these electrical signals and report them as various semen parameters.
The SQA-iOw Sperm Quality Analyzer package provides the SQA-iOw device and USB cable. SQA disposable capillaries, cleaning kits and related testing supplies and test kits are supplied individually.
Here's a breakdown of the acceptance criteria and the study proving the SQA-iOw Sperm Quality Analyzer meets them, based on the provided FDA 510(k) clearance letter:
1. Table of Acceptance Criteria and Reported Device Performance
The FDA clearance letter does not explicitly list predefined quantitative acceptance criteria in a dedicated table format. Instead, it describes two precision studies and a method comparison study, concluding that the results "met the acceptance criteria." For the method comparison, it refers to "Passing-Bablok regression" with "Slopes, y-intercepts, and correlation coefficients, along with the 95% confidence intervals, were reported." The implicit acceptance criteria are typically that these statistical measures fall within a pre-specified range demonstrating equivalence to the predicate device.
Given the information provided, we can infer the acceptance criteria for the parameters measured and the reported performance.
| Parameter Category | Test Type | Acceptance Criteria (Implicit from conclusion) | Reported Device Performance (Summary) |
|---|---|---|---|
| Precision (Control Material) | Repeatability (Within-run), Between-day, Between-operator, Between-site, Total Imprecision | StDev and %CV met the acceptance criteria (specific values not provided in extract). | All reported SDs and %CVs for Controls Level 1, Level 2, and Negative Control were low, indicating high precision. For example, Total %CV for Control Level 1 was 1.84%, and for Level 2 was 4.01%. Total SD and %CV for Negative Control were 0.00%. |
| Precision (Native Samples) | Repeatability (Within-run), Between-operator, Total Imprecision | StDev and %CV met the acceptance criteria for all reported parameters (specific values not provided in extract). | All reported SDs and %CVs for Sperm Concentration, MSC, PMSC, Morphology, Motility, Progressive Motility, Non-Progressive Motility, and Immotile were reported, with the conclusion that they "met the acceptance criteria." For instance, Total %CV for Sperm Concentration ranged from 1.5% to 14.1%, for MSC 0.0% to 41.6%, for PMSC 4.0% to 173.2% (with some very high %CVs for low-level samples), for Morphology 6.5% to 244.9% (with some very high %CVs for low-level samples), for Motility 4.2% to 11.0%, for Progressive Motility 6.1% to 261.7% (with some very high %CVs for low-level samples), for Non-Progressive Motility 6.4% to 76.7% (with some high %CVs for low-level samples), and for Immotile 1.8% to 10.4%. The conclusion states all met acceptance criteria, suggesting that higher %CV for low-level samples was considered acceptable within the context of clinical relevance for those low values. |
| Method Comparison | Passing-Bablok Regression: Intercept, Slope, Correlation Coefficient | Slopes, y-intercepts, and correlation coefficients, along with the 95% confidence intervals, demonstrated clinical equivalence to the predicate device (specific ranges not provided in extract). | CONCENTRATION: Intercept 0.05 (-0.4799 to 0.2610), Slope 0.98 (0.9718 to 0.9836), Correlation 1.0 (0.9974 to 0.9982). |
| | | | MOTILITY: Intercept 2.1 (1.2174 to 3.0000), Slope 0.9 (0.9189 to 0.9565), Correlation 0.96 (0.9493 to 0.9659). |
| | | | PROGRESSIVE MOTILITY: Intercept -0.7 (-1.4516 to 0.0000), Slope 1.0 (0.9286 to 0.9677), Correlation 1.0 (0.9683 to 0.9787). |
| | | | NON-PROGRESSIVE MOTILITY: Intercept -0.3 (-1.0000 to 0.0000), Slope 1.3 (1.2500 to 1.4000), Correlation 0.7 (0.6944 to 0.7850). |
| | | | IMMOTILE: Intercept 4.0 (3.0417 to 5.0000), Slope 0.9 (0.9200 to 0.9583), Correlation 0.9 (0.9130 to 0.9411). |
| | | | MORPHOLOGY: Intercept -1.0 (-1.0000 to -0.0455), Slope 1.0 (0.9091 to 1.0000), Correlation 1.0 (0.9563 to 0.9706). |
| | | | MSC: Intercept 0.3 (0.05708 to 0.5580), Slope 0.9 (0.9344 to 0.9571), Correlation 1.0 (0.9889 to 0.9925). |
| | | | PMSC: Intercept -0.3 (-0.5450 to -0.0968), Slope 0.9 (0.9149 to 0.9364), Correlation 1.0 (0.9894 to 0.9929). |
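To make the Passing-Bablok statistics above easier to interpret, here is a simplified Python sketch of the estimator (shifted-median slope and median intercept). It omits the confidence-interval and linearity (cusum) machinery, and the paired results are hypothetical rather than data from the study.

```python
# Simplified Passing-Bablok sketch: slope from the shifted median of pairwise
# slopes, intercept from median(y - b*x). Illustrative, not a validated analysis.
from statistics import median

def passing_bablok(x, y):
    slopes = []
    k = 0  # count of pairwise slopes below -1 (offset correction)
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            if x[i] == x[j]:
                continue
            s = (y[j] - y[i]) / (x[j] - x[i])
            if s == -1:
                continue            # slopes of exactly -1 are discarded
            if s < -1:
                k += 1
            slopes.append(s)
    slopes.sort()
    m = len(slopes)
    if m % 2:                       # shifted median: offset the position by k
        b = slopes[(m - 1) // 2 + k]
    else:
        b = 0.5 * (slopes[m // 2 - 1 + k] + slopes[m // 2 + k])
    a = median(yi - b * xi for xi, yi in zip(x, y))
    return b, a

# Hypothetical paired concentration results (M/mL): predicate vs. SQA-iOw.
ref = [12, 25, 40, 58, 75, 90, 110, 135]
new = [12.3, 24.6, 40.8, 57.1, 76.0, 89.2, 111.5, 133.9]
slope, intercept = passing_bablok(ref, new)
print(f"slope = {slope:.3f}, intercept = {intercept:.2f}")
```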
2. Sample Size and Data Provenance
- Sample Size for Test Set:
- CLIA Waived User Precision Study (Control Material): 270 measurements in total (3 sites x 3 users per site, i.e., 9 users, over 3 days per site x 3 levels x 10 replicates of each level).
- CLIA Waived User Precision Study (Native Samples): 216 measurements total (9 native semen samples x 2 replicates per sample x 3 users/site x 4 time points).
- Method Comparison Study: 380 donor semen samples.
- Data Provenance (Country of Origin and Retrospective/Prospective):
- The Method Comparison Study was conducted across "Three U.S. sites."
- The Precision studies were also multi-site, with the control material study having "3 sites". The native sample precision study was "across two sites."
- The data appears to be prospectively collected for the purpose of these studies, as detailed study designs are provided, including number of sites, users, days, replicates, and samples. The samples used in the method comparison were "donor semen samples."
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts:
- For the Method Comparison Study, there were "One or more TRAINED OPERATORS per site" (3 sites) who generated reference SQA-V results.
- Qualifications of Experts:
- The experts (TRAINED OPERATORS) were described as "fully trained and considered appropriate for generating reference SQA-V results." Their specific professional qualifications (e.g., medical technologists, clinical lab scientists) or years of experience are not explicitly stated.
4. Adjudication Method for the Test Set
- The document implies that the ground truth for the method comparison study was established by the "TRAINED OPERATORS" using the predicate device (SQA-V). There is no mention of an adjudication process (e.g., 2+1, 3+1 consensus) among multiple experts to establish a "true" ground truth beyond the output of the predicate device operated by trained users. The samples were assayed "in singleton and in a blinded fashion" using both methods, suggesting a direct comparison rather than multi-reader adjudication.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No explicit MRMC comparative effectiveness study was described in terms of human readers improving with AI vs. without AI assistance. The study compares the performance of a new device (SQA-iOw operated by waived users) against a predicate device (SQA-V operated by trained users). It's a method comparison for an automated device, not an AI-assisted human reader study.
6. Standalone (Algorithm Only) Performance
- The SQA-iOw is described as an "automated analyzer" that "utilizes proprietary software code to both perform analysis of semen parameters" and "applies proprietary algorithms to interpret and express these electrical signals and report them as various semen parameters." The performance measurements detailed (precision studies and method comparison) represent the standalone performance of the device/algorithm in processing samples and generating results for the specified semen parameters. There is no human-in-the-loop component in the measurement process itself.
7. Type of Ground Truth Used
- The ground truth for the Method Comparison Study was established using the results from the predicate device (SQA-V) operated by trained users. This serves as a "reference standard" or "comparative method" rather than an absolute ground truth such as pathology or outcomes data.
- For the Precision Studies, the ground truth is statistical variability around the mean measurements of control materials and native samples.
8. Sample Size for the Training Set
- The document does not provide information on the sample size used for the training set for the SQA-iOw's algorithms. The studies described are validation (test set) studies, not algorithm development or training data descriptions.
9. How Ground Truth for Training Set was Established
- The document does not provide information on how the ground truth for the training set was established, as it focuses on the validation studies. It only mentions that the device "applies proprietary algorithms" but not how these algorithms were developed or trained.
Re: K243283
Trade/Device Name: Alinity h-series System
Regulation Number: 21 CFR 864.5220
Regulatory Class: Class II
Regulation Description: Automated Differential Cell Counter
Governing Regulation: 21 CFR 864.5220
The Alinity h-series System is an integrated hematology analyzer (Alinity hq) and slide maker stainer (Alinity hs) intended for screening patient populations found in clinical laboratories by qualified health care professionals. The Alinity h-series can be configured as:
- One stand-alone automated hematology analyzer system.
- A multimodule system that includes at least one Alinity hq analyzer module and may include one Alinity hs slide maker stainer module.
The Alinity hq analyzer module provides a complete blood count and a 6-part white blood cell differential for normal and abnormal cells in capillary and venous whole blood collected in K2EDTA. The Alinity hq analyzer provides quantitative results for the following measurands: WBC, NEU, %N, LYM, %L, MONO, %M, EOS, %E, BASO, %B, IG, %IG, RBC, HCT, HGB, MCV, MCH, MCHC, MCHr, RDW, NRBC, NR/W, RETIC, %R, IRF, PLT, MPV, %rP. The Alinity hq analyzer module is indicated to identify patients with hematologic parameters within and outside of established reference ranges. The Alinity hs slide maker stainer module automates whole blood film preparation and staining and stains externally prepared whole blood smears.
For in vitro diagnostic use.
The Alinity h-series System is a multimodule system that consists of different combinations of one or more of the following modules: a quantitative multi-parameter automated hematology analyzer (Alinity hq) and an automated slide maker stainer (Alinity hs).
The Alinity hq is a quantitative, multi-parameter, automated hematology analyzer designed for in vitro diagnostic use in counting and characterizing blood cells using a multi-angle polarized scatter separation (MAPSS) method to detect and count red blood cells (RBC), nucleated red blood cells (NRBC), platelets (PLT), and white blood cells (WBC), and to perform WBC differentials (DIFF) in whole blood.
There is also an option to choose whether to detect reticulocytes (RETIC) at the same time. The options of the selections are:
- CBC+DIFF: Complete blood count with differential
- CBC+DIFF+RETIC: Complete blood count with differential and reticulocytes
The Alinity h-series of instruments has a scalable design to provide full integration of multiple automated hematology analyzers that can include the integration of an automated blood film preparation and staining module, all of which are controlled by one user interface. The modules are designed to fit together. Each module has an internal conveyor that enables racks of specimen tubes to be transported between modules. The system can move racks between modules to perform different tests on a given specimen (e.g., make slide smears on the Alinity hs).
An Alinity h-series system can be configured as follows:
- Configuration 1: 1 (Alinity hq) + 0 (Alinity hs) = 1+0
- Configuration 2: 1 (Alinity hq) + 1 (Alinity hs) = 1+1
- Configuration 3: 2 (Alinity hq) + 0 (Alinity hs) = 2+0
- Configuration 4: 2 (Alinity hq) + 1 (Alinity hs) = 2+1
Here's a breakdown of the acceptance criteria and study information for the Alinity h-series System, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document doesn't explicitly state all acceptance criteria in a dedicated table format with clear "criteria" vs. "performance" columns for every test. However, it does mention that all results "met the predefined acceptance criteria" for various tests. The tables provided show the device's performance, and the accompanying text confirms it met criteria.
Here's a consolidated view of relevant performance data presented, with implicit acceptance criteria being that the results are within acceptable ranges for clinical diagnostic instruments, or that they demonstrate improvement as intended by the software modification.
| Test Category | Measurand | Reported Device Performance (Subject Device SW 5.8) | Acceptance Criteria (Implicit, based on "met predefined acceptance criteria" statements) |
|---|---|---|---|
| Precision (Normal Samples) | BASO (×10³/μL) | CBC+Diff: 0.021 SD (Range 0.01 to 0.12); CBC+Diff+Retic: 0.025 SD (Range 0.01 to 0.11) | SD/%CV point estimates to be within predefined limits. (Explicitly stated: "All samples were evaluated against all applicable acceptance criteria and met all acceptance criteria.") |
| | %BASO (%) | CBC+Diff: 0.352 SD, 41.04 %CV (Range 0.13 to 2.20); CBC+Diff+Retic: 0.455 SD, 41.08 %CV (Range 0.13 to 2.00) | |
| | LYM (×10³/μL) | CBC+Diff: 0.068 SD (Range 1.10 to 2.01), 3.09 %CV (Range 1.94 to 3.05); CBC+Diff+Retic: 0.063 SD (Range 1.10 to 2.01), 3.17 %CV (Range 1.91 to 3.07) | |
| | %LYM (%) | CBC+Diff: 1.239 SD, 3.34 %CV (Range 13.80 to 57.80); CBC+Diff+Retic: 1.193 SD, 3.63 %CV (Range 13.40 to 58.10) | |
| | WBC (×10³/μL) | CBC+Diff: 0.068 SD (Range 3.72 to 4.06), 2.71 %CV (Range 3.92 to 10.60); CBC+Diff+Retic: 0.085 SD (Range 3.72 to 4.04), 2.22 %CV (Range 3.93 to 10.40) | |
| Precision (Pathological Samples) | WBC (×10³/μL) | Low: 0.083 SD (Range 0.06 to 2.01); High: 1.88 %CV (Range 41.40 to 209.00) | SD or %CV point estimates to be within predefined limits. (Explicitly stated: "All results met the predefined acceptance criteria, demonstrating acceptable short-term precision...") |
| | BASO (×10³/μL) | Low WBC Related: 0.010 SD (Range 0.00 to 0.04) | |
| | LYM (×10³/μL) | Low WBC Related: 0.040 SD (Range 0.12 to 0.74) | |
| Linearity | WBC | Overall Linearity Range: (0.00 to 448.58) ×10³/μL | All results met the predefined acceptance criteria and were determined to be acceptable. |
| Method Comparison (vs. Sysmex XN-10) | BASO (×10³/μL) | r: 0.26 (0.22, 0.30); Slope: 1.25 (1.20, 1.30); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.00 - 2.41, N=1812) | Bias at medical decision points evaluated and within predefined acceptance criteria. "All results were within the predefined acceptance criteria and found to be acceptable..." |
| | %BASO (%) | r: 0.44 (0.40, 0.48); Slope: 1.44 (1.39, 1.50); Intercept: -0.12 (-0.14, -0.09). (Sample Range 0.00 - 8.37, N=1812) | |
| | LYM (×10³/μL) | r: 0.99 (0.99, 0.99); Slope: 0.99 (0.99, 1.00); Intercept: 0.02 (0.01, 0.02). (Sample Range 0.05 - 8.34, N=1598) | |
| | %LYM (%) | r: 1.00 (1.00, 1.00); Slope: 1.00 (0.99, 1.00); Intercept: 0.04 (0.04, 0.15). (Sample Range 0.34 - 84.60, N=1598) | |
| | WBC (×10³/μL) | r: 1.00 (1.00, 1.00); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.07 - 436.00, N=1958) | |
| Method Comparison (vs. Predicate Device SW 5.0) | BASO (×10³/μL) | r: 0.75 (0.73, 0.77); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.00 - 2.41, N=1801) | Bias at medical decision points evaluated and within predefined acceptance criteria. "All results were within the predefined acceptance criteria and found to be acceptable..." |
| | %BASO (%) | r: 0.92 (0.91, 0.92); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.00 - 8.37, N=1801) | |
| | LYM (×10³/μL) | r: 1.00 (1.00, 1.00); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.05 - 8.34, N=1589) | |
| | %LYM (%) | r: 1.00 (1.00, 1.00); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.34 - 84.60, N=1589) | |
| | WBC (×10³/μL) | r: 1.00 (1.00, 1.00); Slope: 1.00 (1.00, 1.00); Intercept: 0.00 (0.00, 0.00). (Sample Range 0.07 - 436.00, N=1948) | |
| Clinical Sensitivity/Specificity | Any Morphological Flags | Sensitivity: 67.57% (58.03%, 76.15%); Specificity: 77.55% (73.79%, 81.01%); Efficiency: 75.85% (72.37%, 79.09%). (N=650) | Met predefined "acceptance criteria" (numerical targets not explicitly given, but stated as met). |
| | Any Distributional Abnormalities | Sensitivity: 83.02% (77.95%, 87.34%); Specificity: 80.59% (76.20%, 84.49%); Efficiency: 81.60% (78.37%, 84.54%). (N=636) | |
| | Any Morphological and/or Distributional Abnormalities | Sensitivity: 80.98% (76.12%, 85.23%); Specificity: 76.09% (71.22%, 80.51%); Efficiency: 78.40% (75.02%, 81.51%). (N=648) | |
| Reference Range Verification | All measurands | Upper bound of 95% CI for percentage of replayed results within predicate reference ranges was ≥ 95%. | Upper bound of the two-sided 95% CI for the percentage of replayed results that were within the reference ranges of the predicate device was ≥ 95%. (Explicitly stated as met.) |
| Specific Improvement for Affected BASO Samples | BASO (×10³/μL) | Subject Device: r: 0.84 (0.75, 0.90); Slope: 1.17 (1.00, 1.32); Intercept: 0.00 (-0.01, 0.01). (Range 0.00 - 1.69, N=67). Predicate Device: r: 0.93 (0.90, 0.96); Slope: 2.22 (1.64, 2.80); Intercept: -0.01 (-0.05, 0.02). (Range 0.03 - 8.11, N=67). Demonstrates reduction of falsely increased basophils. | Results for potentially impacted measurands (BASO and %BASO) must demonstrate reduction of falsely increased basophil measurements. "Additionally, the results demonstrate a reduction in the number of false positive sample %BASO classifications..." |
| | %BASO (%) | Subject Device: r: 0.61 (0.44, 0.75); Slope: 1.22 (0.98, 1.52); Intercept: -0.08 (-0.39, 0.19). (Range 0.00 - 4.33, N=67). Predicate Device: r: 0.33 (0.10, 0.53); Slope: 0.54 (0.31, 0.83); Intercept: 1.83 (1.45, 2.05). (Range 2.00 - 4.49, N=67). Demonstrates reduction of falsely increased basophils. | |
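The reference range verification criterion above hinges on a two-sided 95% confidence interval for a proportion. The sketch below uses the Wilson score interval as one reasonable choice (the letter does not state which interval method was used), with invented counts.

```python
# Sketch of the reference-range verification check: a two-sided 95% CI (Wilson
# score interval here, as an assumed method) for the proportion of replayed
# results inside the predicate's reference range; upper bound must be >= 95%.
import math

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Hypothetical: 117 of 120 replayed normal-donor results fall inside the range.
lo, hi = wilson_ci(117, 120)
print(f"95% CI: {lo*100:.1f}% to {hi*100:.1f}%; upper bound >= 95%: {hi >= 0.95}")
```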
2. Sample Size Used for the Test Set and Data Provenance
The "test set" for this submission largely refers to re-analyzing raw data from the predicate device's (K220031) submission using the new algorithm.
- For Precision Studies (Normal Samples):
- Sample Size: 20 unique healthy donors for CBC+Diff, 19 for CBC+Diff+Retic.
- Data Provenance: Retrospective (raw data files from K220031 submission were replayed). The origin of the donors (country) is not specified, but they are described as "healthy."
- For Precision Studies (Pathological Samples and Medical Decision Levels):
- Sample Size: Minimum of 16 donors per measurand and range, with a minimum of 4 repeatability samples per measurand and range (2 for CBC+Diff, 2 for CBC+Diff+Retic).
- Data Provenance: Retrospective (raw data files from K220031 submission were replayed). The origin of the donors (country) is not specified, but they include "abnormal whole blood samples."
- For Linearity:
- Sample Size: RBC, HGB, NRBC used whole blood samples; WBC, PLT, RETIC used commercially available linearity kits. A minimum of 9 levels were prepared for each measurand.
- Data Provenance: Retrospective (raw data files from K220031 submission were replayed). Whole blood samples and commercial kits.
- For Method Comparison Study (Subject Device vs. Predicate Device K220031 and Sysmex XN-10):
- Sample Size: 2,194 unique venous and/or capillary specimens. 1,528 specimens from subjects with medical conditions, 244 without.
- Data Provenance: Retrospective (raw data files from K220031 submission were replayed). The origin of the donors (country) is not specified, but collected across 7 clinical sites, representing a "wide variety of disease states (clinical conditions)" and "wide range of demographics (age and sex)."
- Specific "affected samples" for basophil analysis: 67 samples.
- For Specimen Stability Studies:
- K2EDTA Venous & Capillary Whole Blood: 14 unique native venous, 30 unique native capillary.
- Controlled Room Temp K2EDTA Venous & Capillary Whole Blood: 10 K2EDTA venous from healthy donors, 10 abnormal de-identified leftover K2EDTA venous, 20 normal K2EDTA capillary.
- K3EDTA Venous Whole Blood: 14 unique native venous.
- K3EDTA Capillary Whole Blood: 94 unique native capillary.
- Data Provenance: Retrospective (raw data files from K220031 submission were replayed). Samples from "apparently healthy donors" and "abnormal de-identified leftover" samples.
- For Detection Limit:
- Sample Size: 2 unique samples per day over a minimum of 3 days.
- Data Provenance: Retrospective (raw data files from K220031 submission were replayed).
- For Clinical Sensitivity/Specificity:
- Sample Size: A subset of 674 venous and capillary whole blood specimens from the method comparison study.
- Data Provenance: Retrospective (raw data files from K220031 submission were replayed). Collected from 6 clinical sites.
- For Reference Range Verification:
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Clinical Sensitivity/Specificity Study:
- Number of Experts: Two independent 200-cell microscopic reviews were performed. So, at least two experts per sample.
- Qualifications: Not explicitly stated beyond "microscopic reviews of a blood smear (reference method)." It can be inferred these would be qualified laboratory professionals (e.g., medical technologists, clinical laboratory scientists) with expertise in manual differential counting, but specific years of experience or board certifications are not provided.
- Ground Truth: The "final WBC differential and WBC, RBC, and PLT morphology results were based on the 400-cell WBC differential counts derived from the average of 2 concurring 200-cell differential counts and concordant RBC and PLT morphology results..."
4. Adjudication Method for the Test Set
- Clinical Sensitivity/Specificity Study: The ground truth was based on "the average of 2 concurring 200-cell differential counts and concordant RBC and PLT morphology results." This indicates an agreement-based adjudication method, likely a 2-of-2 consensus. If the two initial reviews did not concur, a third review/adjudication might have been employed (e.g., 2+1), but this is not explicitly stated. The phrasing "concurring 200-cell differential counts" strongly suggests initial agreement was required.
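To make the arithmetic of the 400-cell reference concrete, here is a minimal sketch (Python) of how two concurring 200-cell manual differentials could be pooled into a single 400-cell differential, with a simple concurrence check. The counts and the concurrence tolerance are hypothetical illustrations; the submission does not state the tolerance or re-review rule actually used.

```python
# Illustrative sketch only: pooling two concurring 200-cell manual differentials
# into a 400-cell reference differential. Counts and the concurrence tolerance
# are hypothetical; the 510(k) summary does not specify these details.

CELL_CLASSES = ["NEUT", "LYMPH", "MONO", "EO", "BASO", "IG"]

def pool_differentials(count_a, count_b, cells_per_read=200, tol_pct=5.0):
    """Pool two 200-cell reads into a 400-cell differential (percentages).

    Returns None if any class differs between readers by more than tol_pct
    percentage points (i.e., the reads do not concur and would need re-review).
    """
    assert sum(count_a.values()) == cells_per_read
    assert sum(count_b.values()) == cells_per_read

    pooled = {}
    for cls in CELL_CLASSES:
        pct_a = 100.0 * count_a.get(cls, 0) / cells_per_read
        pct_b = 100.0 * count_b.get(cls, 0) / cells_per_read
        if abs(pct_a - pct_b) > tol_pct:  # hypothetical concurrence rule
            return None
        # 400-cell result = combined counts over 400 cells (equivalent to averaging %)
        pooled[cls] = 100.0 * (count_a.get(cls, 0) + count_b.get(cls, 0)) / (2 * cells_per_read)
    return pooled

# Example with fabricated counts for one sample:
reader_1 = {"NEUT": 120, "LYMPH": 55, "MONO": 15, "EO": 6, "BASO": 2, "IG": 2}
reader_2 = {"NEUT": 116, "LYMPH": 58, "MONO": 16, "EO": 6, "BASO": 2, "IG": 2}
print(pool_differentials(reader_1, reader_2))
```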
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- No, an MRMC comparative effectiveness study was not done in the context of human readers improving with AI assistance.
- This submission describes an automated differential cell counter (Alinity h-series System) which is a standalone device for providing results without human assistance in the interpretation of the primary measurements (though human review of flags/smears may occur downstream).
- The study primarily focuses on comparing the output of the device with its new algorithm (SW 5.8) to its previous version (SW 5.0) and to a predicate device (Sysmex XN-10). The clinical sensitivity/specificity study compares the device's algorithmic flags/differentials to microscopic analysis (human experts), but this is not an assistance study but rather a standalone performance evaluation against a gold standard.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, a standalone performance evaluation was primarily done. The core of this submission is about a software algorithm modification within an automated analyzer.
- All analytical performance studies (precision, linearity, detection limits, stability) and method comparison studies (against the predicate and Sysmex XN-10) evaluate the Alinity hq (with the new algorithm) as a standalone instrument.
- The clinical sensitivity and specificity study also evaluates the Alinity hq's ability to identify abnormalities and morphological flags independently, comparing its output directly to expert microscopic review (ground truth). There's no mention of a human-in-the-loop scenario where humans are presented with AI results to improve their performance.
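For readers less familiar with how clinical sensitivity and specificity are derived in this kind of standalone evaluation, the short sketch below (Python) shows the standard 2x2 calculation of sensitivity, specificity, and efficiency for device flags compared against the microscopic reference. The counts are invented for illustration and do not reproduce the submission's data.

```python
def flagging_performance(tp, fp, fn, tn):
    """Standard 2x2 agreement metrics for device flags vs. a microscopy reference.

    tp: flagged by device AND abnormal by microscopy (true positive)
    fp: flagged by device but normal by microscopy   (false positive)
    fn: not flagged but abnormal by microscopy       (false negative)
    tn: not flagged and normal by microscopy         (true negative)
    """
    sensitivity = tp / (tp + fn)                      # positive percent agreement
    specificity = tn / (tn + fp)                      # negative percent agreement
    efficiency = (tp + tn) / (tp + fp + fn + tn)      # overall agreement
    return sensitivity, specificity, efficiency

# Invented counts purely for illustration (not from the 510(k)):
sens, spec, eff = flagging_performance(tp=180, fp=25, fn=20, tn=275)
print(f"Sensitivity {sens:.1%}, Specificity {spec:.1%}, Efficiency {eff:.1%}")
```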
7. The Type of Ground Truth Used
- Expert Consensus (Microscopy): For the clinical sensitivity/specificity study, the ground truth for WBC differentials and morphological flags was established by manual microscopic review (400-cell differential) by two independent experts, with results based on their concurrence. This is a form of expert consensus.
- Reference Methods/Device: For analytical performance and method comparison studies, the ground truth was established by:
- The Alinity hq with its previous software version (K220031) (for direct comparison to the subject device's new software).
- Another legally marketed predicate device (Sysmex XN-Series (XN-10) Automated Hematology Analyzer K112605).
- Known concentrations/values in control materials or linearity kits.
- Clinically accepted laboratory practices and norms (e.g., for precision, stability).
8. The Sample Size for the Training Set
- The document does not provide information on the sample size used for the training set for the algorithm modification. Since this submission describes a modification to an existing algorithm ("modified logic for counting basophils"), it's possible the training was done prior to the original K220031 submission, or that new data was used for a specific refinement without explicit detail in this summary. The focus of this 510(k) is the evaluation of the modified algorithm on existing data and its performance compared to predicates, not the development process of the algorithm itself.
9. How the Ground Truth for the Training Set Was Established
- As the training set details are not provided, the method for establishing its ground truth is also not elaborated upon in this 510(k) summary. Given the nature of a hematology analyzer, it would typically involve meticulously characterized blood samples, often with manual microscopic differentials, flow cytometry reference methods, or other gold standards, aligned with clinical laboratory guidelines.

(360 days)
Irvine, California 92618
Re: K240402
Trade/Device Name: Cito CBC System Regulation Number: 21 CFR 864.5220
The Cito CBC system is a quantitative automated hematology analyzer intended for in-vitro diagnostic use to determine the following parameters with whole blood anticoagulated with K2EDTA (Venous):
- CBC parameters: white blood cell count (WBC), red blood cell count (RBC), platelet count (PLT), hemoglobin concentration (HGB), hematocrit (HCT), and mean corpuscular volume (MCV);
- 5-Part WBC Differential (WBC Diff): neutrophil count and percentage (NEUT and NEUT%), lymphocyte count and percentage (LYMPH and LYMPH%), monocyte count and percentage (MONO and MONO%), eosinophil count and percentage (EO and EO%), basophil count and percentage (BASO and BASO%).
It is not indicated for use in diagnosing or monitoring of oncology patients, critically ill patients, or children under the age of 2 years.
The Cito CBC system is an automated hematology analyzer intended to analyze human whole blood and report results for 16 hematology parameters. The system consists of a tabletop analyzer, the Cito CBC Analyzer, and a disposable test cartridge, the Cito CBC Cartridge. The analyzer includes software with a graphical user interface that guides users through the test procedure. To perform a test, the test cartridges and other test consumables are provided as a test kit. The current version of the test cartridge is designed for testing whole blood anticoagulated with K2EDTA. The current version of the test kit is designed for testing venous whole blood collected in a vacutainer tube.
The Cito CBC analyzer utilizes the principle of fluorescent flow cytometry for cell count and cell classification. A laser is used as the light source, and fluorescence and light scattering signals are detected for the measurements. For fluorescent labeling, blood cells are treated with a fluorescent dye that has high affinity binding to nucleic acid. Additionally, the analyzer utilizes the principle of two-wavelength photometry for the measurement of hemoglobin. Blood cells are lysed to release hemoglobin. Meanwhile, the light scattering signal measured from the flow cytometry and the light absorption signals measured from the photometry are both used to quantify the hematocrit and the mean corpuscular volume.
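As a rough illustration of the measurement principles described above, the sketch below (Python) shows a generic Beer-Lambert-style two-wavelength hemoglobin estimate and the standard derived red-cell indices (HCT from RBC and MCV, then MCH and MCHC). The calibration constant and absorbance values are hypothetical; the actual Cito CBC signal processing is proprietary and not disclosed in the summary.

```python
def hgb_two_wavelength(a_main, a_ref, k=7.5):
    """Generic two-wavelength photometric hemoglobin estimate (g/dL).

    a_main: absorbance at the hemoglobin-sensitive wavelength
    a_ref:  absorbance at a reference wavelength (turbidity/background correction)
    k:      hypothetical calibration constant mapping corrected absorbance to g/dL
    """
    return k * (a_main - a_ref)

def red_cell_indices(rbc_million_per_ul, mcv_fl, hgb_g_dl):
    """Standard derived indices used by hematology analyzers."""
    hct_pct = rbc_million_per_ul * mcv_fl / 10.0      # HCT (%) = RBC x MCV / 10
    mch_pg = hgb_g_dl * 10.0 / rbc_million_per_ul     # MCH (pg)
    mchc = hgb_g_dl * 100.0 / hct_pct                 # MCHC (g/dL)
    return hct_pct, mch_pg, mchc

# Hypothetical example values:
hgb = hgb_two_wavelength(a_main=2.25, a_ref=0.25)     # -> 15.0 g/dL with k=7.5
print(hgb, red_cell_indices(rbc_million_per_ul=5.0, mcv_fl=90.0, hgb_g_dl=hgb))
```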
The Cito CBC cartridge is self-contained with all reagents to perform a test. The blood sample is applied to the test cartridge, and the test cartridge is inserted into the analyzer to complete the test. The analyzer has a pneumatic module which provides pressure to the cartridge and drives the applied sample to mix with reagents to form multiple sample mixtures. The cartridge has transparent flow cells to support the measurements of sample mixtures by fluorescent flow cytometry and photometry. The sample mixtures and the measurement wastes are all self-contained inside the cartridge for safe disposal after the test.
The CytoChip Inc. Cito CBC System (K240402) is an automated hematology analyzer intended for in-vitro diagnostic use to determine various complete blood count (CBC) and 5-part white blood cell (WBC) differential parameters. The device is not indicated for use in diagnosing oncology patients, critically ill patients, or children under the age of 2 years.
Here's a breakdown of the acceptance criteria and the studies conducted to prove the device meets these criteria:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document details various studies with stated acceptance criteria, although the specific numerical targets for each parameter's acceptance criteria are not explicitly presented in a consolidated table format. Instead, the document generally states that the device "met the acceptance criteria" for each study. However, the outcomes demonstrating adherence to these criteria are reported.
Here's a summary derived from the document:
Study Type | Acceptance Criteria (General) | Reported Device Performance (Outcome) |
---|---|---|
Precision-Repeatability | Pooled StDev and %CV within specified limits across low, normal, and high intervals. | Met the acceptance criteria for all reported parameters in the three intervals and at the medical decision levels. |
Reproducibility (QC Material) | Total %CV/StDev within specified limits for each control level. | Total %CV/StDev of reported parameters for each of the three levels all met the acceptance criteria. |
Linearity | Percentage bias thresholds defined for each parameter for linear range. | Established the linearity intervals of the reported parameters; implicitly met criteria based on "established." |
Detection Limit (LoQ) | LoQ determined individually for cartridge lots. | LoQ for each parameter was determined; implicitly met criteria based on "determined." |
Assay Measuring Range | Defined based on linearity and LoQ. | Established by combining linearity and LoQ results, implying criteria met. |
Interference | No significant interference up to specified concentrations. | Demonstrated no significant interference for all specified substances and parameters, up to the tested concentrations. (e.g., conjugated bilirubin up to 40 mg/dL, intralipid up to 500 mg/dL without suppression, etc.) |
Metrological Traceability | Device traceable to internationally recognized reference methods via predicate. | Traceability established using predicate device (Sysmex XN) as reference, which is traceable to international reference methods for WBC, RBC, HGB, HCT, PLT. |
Specimen Stability | Mean percentage difference within acceptable limits against baseline. | Results of all tested time points met the acceptance criteria, supporting an 8-hour blood sample stability. |
Cartridge Lot-to-Lot Variability | Within-lot, between-lot, and total variability metrics within limits. | All within-lot, between-lot, and total %CV/StDev met the acceptance criteria. |
QC Material – Lot-to-Lot Reproducibility | Pooled total CV/StDev for each QC parameter and level within limits. | The pooled total CV/StDev of the three lots met the acceptance criteria. |
Method Comparison | Slope, intercept, and correlation coefficient from Deming fittings, and mean bias acceptable. | Results fully met the acceptance criteria for all reported parameters. |
Flagging Comparison | Positive Percent Agreement, Negative Percent Agreement and Overall Agreement within acceptable range. | Positive Agreement: 91.9%, Negative Agreement: 95.3%, Overall Agreement: 93.7%; fully met acceptance criteria. |
Reference Interval | Established/verified against literature values. | Reference intervals established for adults and verified for adolescents and children. |
2. Sample Sizes and Data Provenance
-
Test Set (Clinical Study Summary):
- Method Comparison: 481 venous whole blood samples.
- Flagging Comparison: 1,082 venous whole blood samples.
- Reference Interval: 253 adults, 46 adolescents, and 41 children.
- Data Provenance: The studies were conducted at CLIA-waived sites. The document does not specify the country of origin for the data but implies clinical settings within the US (where CLIA-waived labs would operate). The studies appear to be prospective to gather data specifically for device evaluation.
-
Training Set: The document does not explicitly mention a "training set" or "validation set" in the context of an AI/algorithm. The device is described as an "Automated Differential Cell Counter" utilizing fluorescent flow cytometry. This suggests a traditional IVD device, not necessarily one relying on an AI model trained on a large dataset in the way a deep learning algorithm would be. Therefore, information regarding training set size and ground truth establishment for a training set is not provided because it's likely not applicable to the device's development paradigm as described.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Not applicable or specified for ground truth establishment in the context of expert consensus reading on images. The device is an automated hematology analyzer. Ground truth is established by comparative methods and reference standards.
- Qualifications of Experts: The clinical studies were performed at "CLIA-waived testing sites" by "untrained operators" (for precision/reproducibility) and operators (for method comparison/flagging). The predicate device (Sysmex XN-series) served as the comparative method. The Metrological Traceability Study refers to established international reference methods and expert panels (e.g., ICSH Expert Panel on Cytometry, ICSH Expert Panel on Haemoglobinometry), which are the ultimate "experts" for the fundamental measurements.
4. Adjudication Method for the Test Set
The studies described are primarily analytical and clinical performance studies for an automated IVD device comparing it to a predicate device and reference methods, rather than reader studies for image-based diagnostic aids. Therefore, a multi-reader adjudication method (e.g., 2+1, 3+1) is not applicable or mentioned in this context. The "ground truth" is based on the performance of the predicate device/reference methods.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. This type of study is typically performed for AI-assisted diagnostic imaging devices to assess how AI affects human reader performance. The Cito CBC System is an automated hematology analyzer, not an AI-based imaging diagnostic aid that assists human readers.
6. Standalone Algorithm Performance
This section is not directly applicable. The Cito CBC System is an integrated automated hematology analyzer, not a standalone algorithm. Its performance is measured as the output of the system (analyzer + cartridge). The performance studies evaluate the system's ability to accurately measure hematology parameters compared to reference methods. The "algorithm" (the instrument's internal measurement principles like flow cytometry and photometry) is intrinsically part of the device and its performance is demonstrated through the clinical and non-clinical studies.
7. Type of Ground Truth Used
The ground truth for evaluating the Cito CBC System's performance was established using:
- Comparative Method: The Sysmex XN Automated Hematology Analyzer (K112605), a legally marketed predicate device, was used as the comparative method in both the Method Comparison and Flagging Comparison studies.
- Reference Methods/Standards: For metrological traceability, the predicate device itself establishes traceability to internationally recognized reference methods (e.g., ICSH expert panel methods for WBC, RBC, HGB, HCT, PLT). These international standards serve as the ultimate ground truth for the measured parameters.
- Clinical Conditions/Abnormalities: For flagging comparison, samples covering a wide variety of abnormalities (distribution, morphological, other abnormal conditions) were used. The ground truth for these abnormalities would implicitly come from clinical diagnosis and the predicate device's flagging capabilities.
- Healthy Subjects: For the Reference Interval study, healthy subjects were enrolled, and reference intervals were established non-parametrically or verified against published literature.
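Since the reference intervals were established non-parametrically, the sketch below (Python) illustrates the conventional non-parametric approach of taking the central 95% of healthy-subject results (2.5th and 97.5th percentiles). The data are simulated, and the percentile convention shown is the commonly used one rather than a detail stated in the document.

```python
import random

def nonparametric_reference_interval(values, lower_pct=2.5, upper_pct=97.5):
    """Central 95% reference interval via simple rank-based percentiles."""
    ordered = sorted(values)
    n = len(ordered)
    # Rank-based percentile positions (one common convention; others exist).
    lo_idx = max(0, int(round(lower_pct / 100.0 * (n + 1))) - 1)
    hi_idx = min(n - 1, int(round(upper_pct / 100.0 * (n + 1))) - 1)
    return ordered[lo_idx], ordered[hi_idx]

# Simulated WBC results (10^3/uL) for 253 apparently healthy adults:
random.seed(0)
wbc = [random.gauss(7.0, 1.5) for _ in range(253)]
print(nonparametric_reference_interval(wbc))
```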
8. Sample Size for the Training Set
As stated in point 2, the document does not describe the device as an AI/ML algorithm requiring a distinct "training set." The device is an automated instrument, and its development would typically involve engineering design, calibration, and validation rather than statistical "training" on a data set. Therefore, this information is not provided.
9. How the Ground Truth for the Training Set Was Established
As stated in points 2 and 8, the concept of a "training set" with established ground truth, as typically understood for AI/ML models, is not applicable to the description of the Cito CBC System provided
in the document.
(176 days)
Caesarea, 3088900 Israel
Re: K241628
Trade/Device Name: YO Home Sperm Test Regulation Number: 21 CFR 864.5220
Trade Name: YO Home Sperm Test
Common Name: YO Home Sperm Test
Classification: Class II, POV, 21 CFR 864.5220
The YO Home Sperm Test (YO 3.0) is a smartphone-based test for semen analysis performed by lay users.
The parameters reported by the YO Home Sperm Test (YO 3.0) are:
- Total Sperm Concentration / Sperm Concentration, M/mL
- Total Motile / Motility (PR + Non Progressive [NP]), %
- Progressive Motility (PR), % (combines Rapidly and Slowly Progressive, %)
- Motile Sperm Concentration (MSC), M/mL
- Progressively Motile Sperm Concentration (PMSC), M/mL (combines Rapidly and Slowly Motile Sperm Concentration, M/mL)
The YO Home Sperm Test (YO 3.0) does not provide a comprehensive evaluation of a male's fertility status and is intended for in vitro, over-the-counter use only.
The YO Home Sperm Test (YO 3.0) utilizes proprietary algorithms to conduct semen analysis and to present and store the results and videos on the user's smartphone. The YO application ("app") is downloaded onto the user's own smartphone (iPhone/Android) and is controlled by the user through a proprietary graphical user interface (GUI). The GUI guides the user through the process step by step on the app's screen and operates with the YO device.
The YO kit provides the supplies necessary for up to six tests: semen collection cups, pipettes for sample aspiration, fixed coverslip slides, liquefaction powder, and a YO device that connects via WiFi to a smartphone and houses the YO slide. The YO software app guides the user through the sample preparation and testing process step-by-step with mandatory confirmation by the user of each completed step. The app also operates the YO device's camera and processor to provide a semen video.
The plastic YO device contains a fixed coverslip slide insertion channel, magnification lens, lens holder, WiFi camera and an LED that lights up the optical path. The YO software captures a video in HD (high definition) mode and implements a unique software algorithm to identify sperm and analyze the light fluctuations resulting from sperm movement to report semen values. The algorithm recognizes when the YO autofocus function has the best image and then defines the optimal area of the video for analysis.
When YO reports any semen value below the cut-off for normal, YO recommends performing an additional test with a new sample and seeking medical advice. YO cut-offs are based on WHO 6th ed. reference values for semen parameters, statistical modeling, and expert publications. The user is not required to perform any interpretation of the test results, and YO does not review, verify, or interpret the video provided to the operator. The user can only observe and archive his test results and sperm video. YO does not provide a comprehensive evaluation of a male's fertility status and is intended for over-the-counter, in vitro use only.
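As an illustration of cutoff-based result handling, the sketch below (Python) shows how reported values might be compared against lower reference limits of the kind YO derives from WHO 6th ed. The limit values used here are approximations included only for the example; the actual YO cut-offs and messaging logic are proprietary and not disclosed in the summary.

```python
# Illustrative cutoff check only; limits below are approximate WHO 6th ed.-style
# lower reference limits used for the example, not the device's actual cut-offs.
ILLUSTRATIVE_LIMITS = {
    "sperm_concentration_M_per_mL": 16.0,
    "total_motility_pct": 42.0,
    "progressive_motility_pct": 30.0,
}

def interpret(results, limits=ILLUSTRATIVE_LIMITS):
    """Return the parameters falling below their lower limits, plus a message."""
    low = {k: v for k, v in results.items() if k in limits and v < limits[k]}
    if low:
        # Mirrors the described behavior: recommend a repeat test and medical advice.
        advice = ("Below reference for: " + ", ".join(sorted(low)) +
                  ". Consider retesting with a new sample and seeking medical advice.")
    else:
        advice = "All reported parameters at or above the illustrative lower limits."
    return low, advice

print(interpret({"sperm_concentration_M_per_mL": 12.0,
                 "total_motility_pct": 55.0,
                 "progressive_motility_pct": 28.0})[1])
```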
The YO software quides the user through the testing process step by step on the smartphone's screen and operates in conjunction with the: YO device, smartphone's built-in camera, flash, and man-machine interface to report and store the results of 5 sperm parameters and a video of the user's semen samples. After analyzing the operator's semen video, the YO software reports both the quantitative results and an explanation about the 5 Semen parameters which are visually presented in the YO app directly following testing. In addition, the operator's sperm video is also presented in the test results section directly following the testing phase of the app.
Here's a summary of the acceptance criteria and the study proving the device meets those criteria, based on the provided text:
Device: YO Home Sperm Test (YO 3.0)
1. Table of Acceptance Criteria and Reported Device Performance:
Parameter | Acceptance Criteria (from analytical studies) | Reported Device Performance (from clinical study vs. SQA-V) |
---|---|---|
Analytical Performance | ||
Within-run Repeatability (%CV) | 0.9 | R > 0.9 (claim met) |
Linearity (Slope) | 1.0 +/- 0.2 | Slope > 1.0 +/- 0.2 (claim met, from text) |
Interference (Percent difference) | Within 15% of controls | Within 15% of controls |
Clinical Performance (vs. SQA-V) | (Implied good correlation and user comprehension) | |
Sperm Concentration (M/mL) | | Intercept: 2.29 (95% CI: 1.29 to 3.25); Slope: 0.86 (95% CI: 0.82 to 0.91); Correlation (r): 0.93 (95% CI: 0.92 to 0.95) |
Motility, % | | Intercept: 0.00 (95% CI: 0.00 to 3.00); Slope: 1.05 (95% CI: 1.00 to 1.11); Correlation (r): 0.90 (95% CI: 0.88 to 0.92) |
Progressive Motility, % | | Intercept: -0.47 (95% CI: -2.78 to 0.00); Slope: 1.24 (95% CI: 1.16 to 1.31); Correlation (r): 0.88 (95% CI: 0.85 to 0.90) |
Motile Sperm Concentration (M/mL) | | Intercept: 1.84 (95% CI: 1.50 to 2.20); Slope: 0.92 (95% CI: 0.88 to 0.95); Correlation (r): 0.94 (95% CI: 0.93 to 0.95) |
Progressively Motile Sperm Concentration (M/mL) | | Intercept: -0.04 (95% CI: -0.44 to 0.00); Slope: 1.03 (95% CI: 0.98 to 1.07); Correlation (r): 0.94 (95% CI: 0.92 to 0.95) |
User Comprehension (Questionnaire) | High percentage of correct answers (implied) | 87% - 99% correct responses across various questions |
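For context on how the slope, intercept, and correlation figures in the clinical performance rows above are typically produced, here is a minimal sketch (Python) of an ordinary least-squares fit with Pearson correlation on paired device and comparator results. The submission does not state which regression model was used (Deming or Passing-Bablok are also common), and the data below are fabricated purely for illustration.

```python
from statistics import mean

def compare_methods(x, y):
    """Least-squares slope/intercept of y (test device) on x (comparator),
    plus Pearson r, for paired method-comparison results."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

# Fabricated paired sperm-concentration results (M/mL): comparator vs. test device
sqa = [5, 12, 20, 35, 50, 65, 80, 95, 110, 140]
yo = [6.5, 12.8, 19.5, 33.0, 46.0, 58.5, 71.0, 84.0, 97.0, 122.0]
print(compare_methods(sqa, yo))
```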
2. Sample size used for the test set and the data provenance:
-
Clinical Study (Method Comparison):
- Sample Size: 309 comparative data sets overall. A minimum of 100 semen samples per site (across 3 US sites).
- Data Provenance: Prospective. Conducted at three US sites, with lay users recruited to analyze their own samples or female users testing donor samples.
-
Analytical Studies (Precision, LoD/LoQ, Linearity, Interference):
- Sample Size for Precision (user repeatability): Approximately 20 users per site (3 sites), testing samples in triplicate.
- Sample Size for Precision (professional user reproducibility): 15-30 native semen samples per site (3 sites), representing 3 levels, 2 reps per sample, 4 time points, 3 YO devices (total 360 measurements, 24 results per sample).
- Sample Size for LoD/LoQ: Two samples (blank and low concentration), 5 YO3 devices, 2 lots of slides, 2 operators. Each level assayed 12 times on each device (60 results per level).
- Sample Size for Linearity: Semen samples prepared at ten concentration intervals (low to high). Tested in three YO devices per concentration level.
- Sample Size for Interference: Two concentration levels of semen samples and 11 potentially interfering substances.
- Data Provenance: In-house analytical studies. Semen samples collected following WHO 6th ed. manual guidance from consented donors. Analyzed in a blinded fashion on SQA-iO and SQA-V.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Clinical Study Ground Truth: The comparator device was the SQA-V sperm quality analyzer operated by TRAINED OPERATORS. The text does not specify the number or detailed qualifications of these "trained operators" beyond that.
- Analytical Studies Ground Truth: The text mentions "comparative device, SQA-V" and for LoD/LoQ, confirmation of concentration by "manual microscope." For training ground truth, it implies the use of the SQA-iO and SQA-V, as well as WHO 6th ed. guidelines.
4. Adjudication method for the test set:
- The text describes a "method comparison study" where "Each semen sample was tested in singleton in a blinded fashion by each method using split aliquots." This indicates a direct comparison to a reference standard (SQA-V) rather than an expert consensus adjudication of specific cases.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No MRMC comparative effectiveness study was done.
- This device is designed for lay users (Over-The-Counter) and the study compares the device's performance to a professional laboratory device (SQA-V), not human readers with and without AI assistance. The "lay users" are the primary operators of the YO device, and their performance with the device is what's being evaluated against the SQA-V.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- The analytical (bench) studies (Precision, LoD/LoQ, Linearity, Interference) assess the device's technical performance characteristics, which is essentially the "algorithm without human-in-the-loop" once the sample is loaded. However, the overall device function requires human interaction for sample preparation and device operation as instructed by the app.
- The clinical validation specifically compares the algorithm's performance when operated by intended lay users against results from the comparator device (SQA-V) operated by trained operators. So, while the underlying algorithm's accuracy is foundational, the clinical study explicitly includes human-in-the-loop for the test device.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Clinical Study: The ground truth was established by comparison to a legally marketed predicate device, the SQA-V sperm quality analyzer, operated by trained professionals.
- Analytical Studies:
- Precision, Linearity, Interference: Comparison against the SQA-V comparator device.
- LoD/LoQ: Manual microscope verification for blank and low concentration samples.
8. The sample size for the training set:
- The document does not explicitly state the sample size used for the training set for the YO Home Sperm Test (YO 3.0) algorithms. It describes the data used for analytical validation and clinical validation, but not the development/training phase.
9. How the ground truth for the training set was established:
- The document does not detail how the ground truth for the training set was established. It states that the device "utilizes proprietary algorithms" and implements a "unique software algorithm to identify sperm and analyze the light fluctuations resulting from sperm movement." It also mentions "YO cut-offs are based on WHO 6th ed. reference values for semen parameters, statistical modeling, and expert publications." This implies the algorithms were developed and refined using data aligned with WHO standards and likely validated against reference methods like the SQA-V, but the specifics of the training data development are not provided.
(157 days)
Cellular Analysis System; UniCel DxH 690T Coulter Cellular Analysis System Regulation Number: 21 CFR 864.5220
Differential cell counter; Classification Name: Counter, Differential Cell; Regulation Number: 21 CFR 864.5220
The UniCel DxH 900/DxH 690T Coulter Cellular Analysis System is a quantitative, multi-parameter, automated hematology analyzer for in vitro diagnostic use in screening patient populations found in clinical laboratories.
The DxH 900/DxH 690T analyzer identifies and enumerates the following parameters:
· Whole Blood (Venous or Capillary): WBC, RBC, HGB, HCT, MCV, MCH, MCHC, RDW, RDW-SD, PLT, MPV, NE%, NE#, LY%, LY#, MO%, MO#, EO%, EO#, BA%, BA#, NRBC%, NRBC#, RET%, RET#, MRV, IRF
· Pre-Diluted Whole Blood (Venous or Capillary): WBC, RBC, HGB, HCT, MCV, MCH, MCHC, RDW, RDW-SD, PLT, MPV
· Body Fluids (cerebrospinal, serous or synovial): TNC and RBC
The UniCel DxH Slidemaker Stainer II Coulter Cellular Analysis System is a fully automated slide preparation and staining device that aspirates a whole-blood sample, smears a blood film on a clean microscope slide, and delivers a variety of fixatives, stains, buffers, and rinse solutions to that blood smear.
The UniCel DxH 900/DxH 690T System contains an automated hematology analyzer (DxH 900 or DxH 690T) designed for in vitro diagnostic use in screening patient populations by clinical laboratories. The system provides a Complete Blood Count (CBC), Leukocyte 5-Part Differential (Diff), Reticulocyte (Retic), Nucleated Red Blood Cell (NRBC) on whole blood, as well as, Total Nucleated Count (TNC), and Red Cell Count (RBC) on Body Fluids (cerebrospinal, serous and synovial).
The DxH Slidemaker Stainer II is a fully automated slide preparation and staining device that aspirates a whole-blood sample, smears a blood film on a clean microscope slide, and delivers a variety of fixatives, stains, buffers, and rinse solutions to that blood smear.
The DxH 900 System may consist of a workcell (multiple connected DxH 900 instruments with or without a DxH Slidemaker Stainer II), a stand-alone DxH 900, or a stand-alone DxH Slidemaker Stainer II. The DxH 690T System consists of a stand-alone DxH 690T instrument.
The provided text is a 510(k) Summary for a medical device submission (K240252) for the UniCel DxH 900/DxH 690T Coulter Cellular Analysis System and the UniCel DxH Slidemaker Stainer II Coulter Cellular Analysis System. This document focuses on demonstrating substantial equivalence to predicate devices rather than proving a device meets specific acceptance criteria as would be the case for a novel device or a device requiring clinical efficacy trials.
Therefore, the acceptance criteria are largely implied by the performance of the predicate device and established CLSI (Clinical and Laboratory Standards Institute) guidelines for analytical performance. The "study" proving the device met acceptance criteria is a series of non-clinical performance verification tests designed to demonstrate that the new devices (DxH 900, DxH 690T, SMS II) perform "as well as or better than" the predicate devices (DxH 800, SMS) across various analytical parameters.
Here's an attempt to structure the information based on your request, understanding that the context is substantial equivalence testing, not a novel device demonstrating de novo clinical acceptance.
Device Under Evaluation for Substantial Equivalence:
- UniCel DxH 900 Coulter Cellular Analysis System
- UniCel DxH Slidemaker Stainer II Coulter Cellular Analysis System
- UniCel DxH 690T Coulter Cellular Analysis System
Predicate Devices:
- UniCel DxH 800 Coulter Cellular Analysis System (K193124)
- UniCel DxH Slidemaker Stainer Coulter Cellular Analysis System (K162414)
The "acceptance criteria" for the new devices are generally linked to demonstrating performance comparable to, or better than, the predicate devices, adhering to established analytical performance standards (e.g., CLSI guidelines). The "study" involves various analytical performance tests comparing the subject devices to the predicate.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are generally implied by the predicate device's performance specifications and adherence to CLSI guidelines. The performance reported below demonstrates that the subject devices meet these implicit criteria (i.e., they perform comparably to the predicate). Due to the extensive list of parameters and a lack of explicit, single "acceptance limit" given for each, I will provide summary tables where possible, extrapolating from the comprehensive data provided in the text.
a. Repeatability (Within-run Imprecision)
- Acceptance Criteria (Implied): Percent Coefficient of Variation (%CV) or Standard Deviation (SD) within specified limits, typically reflecting clinically acceptable imprecision and comparable to predicate device performance.
- Reported Device Performance (DxH 900 - selected parameters; all passed):
Parameter | Units | Level | N | Test Result Mean | Test Result %CV or SD | Conclusion |
---|---|---|---|---|---|---|
WBC | x10³ cells/µL | 5.000 - 10.000 | 10 | 5.80 | 0.69% CV | Pass |
RBC | x10⁶ cells/µL | 4.500 - 5.500 | 10 | 4.72 | 0.50% CV | Pass |
Hgb | g/dL | 14.00 - 16.00 | 10 | 15.03 | 0.36% CV | Pass |
Platelet | x10³ cells/µL | 200.0 - 400.0 | 10 | 256.80 | 1.35% CV | Pass |
Neut % | % | 50.0 - 60.0 | 10 | 58.24 | 0.99 %CV | Pass |
Retic % | % | 0.000 - 1.500 | 10 | 1.17 | 0.13 SD | Pass |
BF-RBC | cells/mm³ | 10,000 - 15,000 | 10 | 12,643 | 2.42 %CV | Pass |
BF-TNC | cells/mm³ | 50-2,000 | 10 | 594 | 1.28 %CV | Pass |
(Similar comprehensive data provided for DxH 690T, all passed.)
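To make the repeatability metric in the table above explicit, the sketch below (Python) computes the within-run mean, SD, and %CV from N = 10 replicate measurements. The replicate values are fabricated and chosen only to land near the WBC level shown; they are not taken from the submission.

```python
from statistics import mean, stdev

def within_run_repeatability(replicates):
    """Mean, SD, and %CV from replicate measurements of a single sample."""
    m = mean(replicates)
    sd = stdev(replicates)          # sample standard deviation (n-1 denominator)
    cv_pct = 100.0 * sd / m
    return m, sd, cv_pct

# Fabricated 10-replicate WBC run (10^3 cells/uL), roughly matching the table's level:
wbc_replicates = [5.78, 5.82, 5.76, 5.84, 5.80, 5.79, 5.83, 5.77, 5.81, 5.80]
m, sd, cv = within_run_repeatability(wbc_replicates)
print(f"Mean {m:.2f}, SD {sd:.3f}, CV {cv:.2f}%")
```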
b. Reproducibility (Across-site/day/instrument Imprecision)
- Acceptance Criteria (Implied): Reproducibility (Total CV%) to be within clinically acceptable limits, demonstrating consistent performance across different instruments, days, and runs. The test instruments met the reproducibility specifications for all parameters.
- Reported Device Performance (DxH 900-3S workcell - examples for Level 1, all passed):
Parameter | Unit | N (total) | Reproducibility CV% | Conclusion |
---|---|---|---|---|
WBC | 10^3 cells/uL | 90 | 2.30 | Pass |
RBC | 10^6 cells/uL | 90 | 0.66 | Pass |
HGB | g/dL | 90 | 0.52 | Pass |
PLT | 10^3 cells/uL | 90 | 1.64 | Pass |
BF TNC | cells/mm^3 | 90 | 7.28 | Pass |
BF RBC | cells/mm^3 | 90 | 3.97 | Pass |
c. Linearity
- Acceptance Criteria: Deviation between measured and predicted values to be within specified acceptance limits for each parameter across the analytical measuring interval (AMI). All instances showed "Pass".
- Reported Linearity Ranges (all passed on DxH 900-3S workcell):
Parameter | Units | Linearity Range Results |
---|---|---|
WBC | 10³ cells/µL | 0.064 - 408.5 |
RBC | 10⁶ cells/µL | 0.001 - 8.560 |
PLT | 10³ cells/µL | 3.2 - 3002 |
HGB | g/dL | 0.04 - 26.070 |
BF-RBC | cells/mm³ | 1113.10 - 6,353,906 |
BF-TNC | cells/mm³ | 31.50 - 92,745 |
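As a sketch of the linearity evaluation described in item c above, the code below (Python) fits a first-order line to the level means, computes the deviation of each measured mean from the predicted value, and compares it to an allowable deviation. The allowable-deviation limits and data are hypothetical; CLSI EP06-based evaluations may additionally assess higher-order polynomial fits, which is not shown here.

```python
from statistics import mean

def linearity_check(target_levels, measured_means, allowable_dev):
    """First-order fit of measured vs. target, then per-level deviation check.

    allowable_dev: maximum |measured - predicted| permitted at each level
                   (hypothetical limits, for illustration only).
    """
    mx, my = mean(target_levels), mean(measured_means)
    sxx = sum((x - mx) ** 2 for x in target_levels)
    sxy = sum((x - mx) * (y - my) for x, y in zip(target_levels, measured_means))
    slope = sxy / sxx
    intercept = my - slope * mx
    results = []
    for x, y, limit in zip(target_levels, measured_means, allowable_dev):
        predicted = slope * x + intercept
        results.append((x, y, round(predicted, 2), abs(y - predicted) <= limit))
    return slope, intercept, results

# Hypothetical WBC linearity series (10^3 cells/uL):
targets = [0.5, 5, 25, 50, 100, 200, 300, 400]
measured = [0.52, 5.1, 24.6, 49.2, 99.0, 198.5, 297.0, 396.0]
limits = [0.3, 0.5, 1.5, 2.5, 5.0, 10.0, 15.0, 20.0]
print(linearity_check(targets, measured, limits))
```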
d. Carryover
- Acceptance Criteria: Carryover to be below specified limits (percentages for WBC, RBC, Hgb, and PLT; event counts for Diff, NRBC, and Retic).
- Reported Device Performance (carryover results, CnDR mode):
CnDR Mode | WBC | RBC | Hgb | PLT | Diff | NRBC | Retic |
---|---|---|---|---|---|---|---|
DxH 690T | 0.11% | 0.03% | 0.26% | 0.07% | 22, 20, 35 | 12, 5, 7 | 25, 20, 22 |
DxH 900 | 0.09% | 0.05% | 0.23% | 0.17% | 11,15,37 | 7,1,3 | 10,6,3 |
2. Sample Size Used for the Test Set and Data Provenance

- **Method Comparison (Whole Blood):** 735 whole blood specimens from 3 clinical sites (adult and pediatric samples).
- **Method Comparison (Body Fluid):** 195 body fluid specimens (BF TNC) and 130 body fluid specimens (BF RBC) from multiple sites.
- **Flagging Analysis:** 735 whole blood samples (residual normal and abnormal) from three (3) clinical sites.
- Data Provenance: Data collected from multiple clinical sites (indicated for method comparison and flagging analysis), and testing included analysis on workcell configurations (DxH 900-3S workcell) as well as stand-alone instruments. The data appears to be prospective as it involves active testing of the new devices. The countries of origin are not specified but typical of FDA submissions, implies US-based or international sites compliant with US regulations.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not mention experts and their qualifications for establishing ground truth. This is a common characteristic of analytical performance studies for IVD devices like hematology analyzers. The "ground truth" for quantitative measurements (e.g., WBC, RBC counts) is typically the measurement itself obtained from a reference method or predicate device, often with rigorous calibration and quality control. For qualitative aspects like flagging, the ground truth is established by comparing the flag output to the predicate device's flag output, assuming the predicate's performance is already validated. There is no indication of human "expert" review for individual case ground truth for these types of measurements.
4. Adjudication Method for the Test Set
Since ground truth is based on predefined analytical measurements against reference methods/predicate devices rather than human interpretation, an adjudication method like 2+1 or 3+1 (common in image-based AI studies) is not applicable and not mentioned in the document.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study was not done. These types of studies are typically performed for AI-assisted diagnostic devices where the AI is intended to improve human reader performance (e.g., radiologists interpreting images). This submission is for an automated hematology analyzer, where the device performs the analysis directly without human interpretation in the loop in the same way. The evaluation is focused on the device's analytical performance (accuracy, precision, linearity, etc.) compared to its predicate device.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, the core of this submission is a standalone performance evaluation of the DxH 900/690T systems. The "algorithm" here refers to the instrument's internal processing of raw signals to derive reported parameters (e.g., cell counts, differentials). Its performance is assessed independently of human intervention during the measurement process, and its output is compared to a reference standard (predicate device) and expected analytical capabilities.
7. The Type of Ground Truth Used
The ground truth for the analytical performance studies (repeatability, linearity, method comparison, etc.) is based on:
- Reference Method / Predicate Device Comparison: The performance of the UniCel DxH 900/DxH 690T is directly compared against the established performance of the legally marketed predicate UniCel DxH 800 and previously cleared UniCel DxH Slidemaker Stainer. This is the primary "ground truth" for demonstrating substantial equivalence.
- CLSI Guidelines: Adherence to established CLSI (Clinical and Laboratory Standards Institute) protocols (e.g., EP09c, H26-A2, EP05-A3, EP06-A, EP17-A2, EP28-A3c) implicitly defines the "truth" or expected range of acceptable analytical performance for these types of in vitro diagnostic devices.
- Control Materials and Calibrators: Certified reference materials and quality control products (e.g., COULTER 6C Cell Control, COULTER S-CAL Calibrator) are used to establish and verify instrument accuracy and precision, serving as a daily "ground truth" check.
- Fresh Patient Samples: Used in method comparison and other studies to ensure real-world performance reflects clinical conditions.
There is no mention of "expert consensus," "pathology," or "outcomes data" being used as ground truth in the traditional sense for these analytical performance studies of a hematology analyzer.
8. The Sample Size for the Training Set
The document does not provide information on the training set size. Hematology analyzers, while highly sophisticated, are typically rule-based systems or calibrated analytical instruments, rather than machine learning/AI models that require explicit "training sets" in the modern sense (e.g., thousands of labeled images for deep learning). Their "training" involves instrument calibration using standardized calibrators. If there are adaptive algorithms or older machine learning components, their "training" data is internal to the manufacturer's development process and not typically disclosed in a 510(k) summary focused on analytical validation.
9. How the Ground Truth for the Training Set Was Established
As noted above, the concept of a "training set" and its "ground truth" in the context of a modern AI/ML device is not directly applicable to this traditional analytical instrument unless it has specific, undisclosed internal adaptive algorithms. The "ground truth" for the instrument's operational accuracy and precision is primarily established through its calibration process using certified calibrator materials and verified against quality control materials and comparisons to reference methods or predicate devices as part of its manufacturing and analytical validation.
(244 days)
Montpellier Cedex 4, 34184 France
Re: K232946
Trade/Device Name: Yumizen H2500 Regulation Number: 21 CFR 864.5220
Product Code: GKZ | Class: II | Regulation Number: 21 CFR 864.5220
The Yumizen H2500 is a quantitative multiparameter fully automated hematology analyzer intended for in-vitro diagnostic use in clinical laboratories by qualified healthcare professionals for the screening of patient populations.
The Yumizen H2500 is intended to perform tests on the following specimens:
- Anticoagulated whole blood specimens
- Body fluids (synovial fluids, serous fluids and cerebrospinal fluids).
The Yumizen H2500 classifies and enumerates the following parameters:
- A complete blood count (CBC) consisting of TNC, WBC, RBC, HGB, calculated HCT, MCV, calculated MCH, calculated MCHC, RDW-SD, RDW-CV, PLT, PLT-Ox, LPF, MPV.
- A leukocyte differential count consisting of LYM (%/#), MON (%/#), NEU (%/#), EOS (%/#), BAS (%/#), IMG (%/#).
- A nucleated red blood cell count consisting of NRBC (%/#).
- A reticulocyte analysis consisting of RET (%/#), calculated CRC, IRF, RHCC.
- Quantitative determination of blood cells in synovial fluids, serous fluids and cerebrospinal fluids consisting of BFWBC, BFRBC, BFPN (%/#), BFMN (%/#).
Note: Venous and capillary whole blood should be collected in K2EDTA anticoagulant. Serous and synovial fluids should be collected without anticoagulant or in K2EDTA anticoagulant to prevent clotting of fluid. The use of anticoagulants with cerebrospinal fluid specimens is neither required nor recommended. Alternatively, Sodium Heparin or Lithium Heparin may be used for synovial fluid.
The HORIBA Medical analyzer modules Yumizen H2500 are multi-parameter hematology analyzers intended to perform tests on whole blood samples collected in K2EDTA and body fluids (synovial and serous) collected in K2EDTA anticoagulant. The analyzers can also perform tests on cerebrospinal fluids, which should not be collected in any anticoagulant.
The Analyzer Units (Yumizen H2500) aspirate, dilute, mix, and analyze blood and body fluid samples.
The Yumizen H2500 model provides Complete Blood Count (CBC), Differential (DIFF), Reticulocyte counts (RET) and Optical Platelet counts as well as Body Fluid counts (BF).
The analyzer models may function with:
- a Data Management Unit (Yumizen P8000), which is the interface between the laboratory connections (LIS) and the Analyzer Unit(s). Through its interface, the Yumizen P8000 enables the user to monitor the workflow of patient data, centralize result data, perform reflex testing, customize rules, centralize the validation operations, run quality control, and manage quality assurance on results.
1. A table of acceptance criteria and the reported device performance
Since specific acceptance criteria values were not explicitly stated for all performance aspects, I will infer them as generally "met predefined acceptance criteria" or "demonstrated comparable performance" where stated in the document.
Test Category | Parameter | Acceptance Criteria (Inferred) | Reported Device Performance |
---|---|---|---|
Analytical Performance | |||
Repeatability (Whole Blood) | All parameters | All components of variation met predefined acceptance criteria. | Max %CV and Max SD values were reported for various parameters across normal, low, and high target ranges (Tables 4 and 5) and were found to meet the predefined acceptance criteria. For example, normal WBC has Max %CV of 2% and Max SD of 0.150, while high WBC (10-30 10^9/L) has Max %CV of 1.9% and Max SD of 0.330. |
Repeatability (Body Fluid) | All parameters | All components of variation met predefined acceptance criteria. | Max %CV and Max SD values were reported for various parameters (BFWBC, BFRBC, BFPN#, BFPN%, BFMN#, BFMN%) across different levels and fluid types (serous, synovial, CSF) (Tables 6, 7, 8) and were found to meet the predefined acceptance criteria. For example, BFWBC (Level 1) for serous fluids has Max %CV of 10.4% and Max SD of 8.5. |
Reproducibility (Whole Blood) | All parameters | Met acceptance criteria per CLSI EP05-A3. | Detailed SD and CV% reported for within-run, between-run, between-day, and between-site variations for whole blood control materials (ABX Difftrol - Table 9, ABX Minotrol Retic - implicit in text, though table only provided for Difftrol). All results met the acceptance criteria. For example, WBC (Low) total CV% was 3.62%; HGB (Normal) total CV% was 0.89%. |
Reproducibility (Body Fluids) | All parameters | Met acceptance criteria per CLSI EP05-A3. | Detailed SD and CV% reported for within-run, between-run, between-day, and between-site variations for body fluid control material (BFTROL - Table 11). All results met the acceptance criteria. For example, BFWBC (Level 2) total CV% was 6.34%; BFRBC (Level 3) total CV% was 3.55%. |
Linearity (Whole Blood) | WBC, TNC, RBC, HGB, HCT, PLT, PLT-Ox, RET#, NRBC# | All results met predefined acceptance criteria and were acceptable. | Linearity ranges were established for each parameter (Table 12). For example, WBC: 0.06 – 344.50 10^9/L; HGB: 0.5 – 25.8 g/dL. |
Linearity (Body Fluids) | BFWBC, BFRBC | All results met predefined acceptance criteria and were acceptable. | Linearity ranges were established for BFWBC (3 – 11 345 10^6/L) and BFRBC (1079 – 5 394 633 10^6/L) (Table 13). |
Interferences (Whole Blood) | All parameters | No interference detected or no significant effect. Conjugated bilirubin may have a significant effect on low HGB levels. | The device was found not susceptible to interference from Hemoglobin, Lipemia, Bilirubin (except conjugated bilirubin on low HGB), Glucose, and Yeast for various parameters. Intrinsic interferences from elevated WBC, RBC, and PLT measurands showed no interference for specific parameters. Some potential impacts (e.g., PLT-Ox and LYM# from macrothrombocytosis; RDW-SD from dual RBC population) were noted, but overall deemed acceptable within context of the study (Table 15). |
Interferences (Body Fluids) | All parameters | No interference detected or no significant effect. Interference from yeast was detected. | No significant effect was detected on BFWBC, BFRBC from Hemoglobin, Lipemia, Bilirubin, and Total Protein across various fluid types. Interference from yeast was detected. Known interferences from crystals and liposomal particles were acknowledged as per literature (Table 17). |
Stability (Whole Blood) | All parameters | Acceptance criteria for each parameter met for defined time intervals. | Whole blood samples are stable for 24h at room temperature for CBC/LMNE/NRBC/RET parameters, and 48h (CBC/LMNE/NRBC) or 72h (RET) at refrigerated temperature (Table 18). |
Stability (Body Fluid) | All parameters | Acceptance criteria for sample stability (max bias) met. | Serous and synovial fluids are stable for 24h at room temperature for BFWBC/BFRBC/BFPN/BFMN parameters. CSF is stable for 4h at room temperature for BFWBC/BFRBC/BFPN/BFMN parameters (Table 19). |
Detection Limits (Whole Blood) | WBC, TNC, RBC, HGB, HCT, PLT, PLT-Ox, RET# | All results met predefined acceptance criteria and were acceptable. | LoB, LoD, and LoQ values were determined for various parameters (Table 20). For example, WBC: LoB 0.05, LoD 0.07, LoQ 0.10 (10^9/L). |
Detection Limits (Body Fluid) | BFWBC, BFRBC | All results met predefined acceptance criteria and were acceptable. | LoB, LoD, and LoQ values were determined for BFWBC (LoB 2, LoD 4, LoQ 5 (10^6/L)) and BFRBC (LoB 500, LoD 1000, LoQ 1500 (10^6/L)) (Table 21). |
Carry-over (Whole Blood) | All parameters | All carry-over results are within specifications. | Not applicable - Carry-over results are within specifications. |
Carry-over (Body Fluids) | All parameters | All carry-over results are within specifications. | Not applicable - Carry-over results are within specifications. |
Comparison Studies | |||
Method Comparison (Whole Blood) | All parameters | All results were within the predefined acceptance criteria and acceptable. Yumizen H2500 demonstrated comparable performance to predicate device. | Passing-Bablok regression analysis (r, slope, intercept with 95% CI) was performed (Table 22). Correlations ranged from 0.184 (MCHC, potentially an outlier or specific context needed) to 0.998 (HGB). Most parameters showed high correlations and slopes close to 1, with intercepts close to 0, indicating strong agreement with the predicate device. For example, WBC: r=0.996, slope=1.012, intercept=0.045. |
Method Comparison (Body Fluids) | All parameters | All results were within the predefined acceptance criteria and acceptable. Yumizen H2500 demonstrated comparable performance to predicate device. | Passing-Bablok regression analysis (r, slope, intercept with 95% CI) was performed for synovial, serous, and CSF (Tables 23, 24, 25). Correlations varied, with BFRBC showing very high correlation (0.999-1.000) across all fluid types. Other parameters showed good correlations (e.g., BFWBC r=0.923-0.980, BFMN% r=0.816-0.967) indicating comparable performance. For example, CSF BFWBC: r=0.980, slope=0.99, intercept=5.14. |
Comparability (Sampling types) | All parameters | Acceptance criteria met for all parameters at all levels. | Bias estimated at low, mid, and high points for each parameter showed comparability between capillary and venous whole blood samples. |
Comparability (Anticoagulants) | All parameters | No difference linked to anticoagulant or significant effect linked to matrix observed. | Visual examination of Bland-Altman difference plots showed no difference linked to K2EDTA, Lithium Heparin, or Sodium Heparin anticoagulants for synovial fluid. No difference linked to anticoagulant for K2EDTA in serous fluids. |
Comparability (Analytical Modes) | All parameters | Acceptance criteria met for all parameters at all levels. | Bias estimated at low, mid, and high points for each parameter demonstrated comparable performance characteristics for all Yumizen H2500 modes (DIF, DIR, RBC_PLTO, DIF_LV). |
Comparability (Manual vs Auto) | All parameters | Acceptance criteria met for all parameters at all levels. | Bias estimated at low, mid, and high points for each parameter demonstrated comparable performance characteristics between automatic rack mode and manual (STAT) mode. |
Clinical Sensitivity | Morphological Flags, Distributional Abnormality, Combined Flags | Met predefined acceptance criteria for both sensitivity and specificity. | Sensitivity: 80.5% (Morphological), 91.9% (Distributional), 90.0% (Combined). Specificity: 83.6% (Morphological), 92.7% (Distributional), 90.4% (Combined). Efficiency: 82.2% (Morphological), 92.1% (Distributional), 90.1% (Combined) (Table 27). |
Expected Values/Reference Range | All parameters | Establishment of reference intervals. | Reference intervals were established for adult (male/female) and pediatric (neonate, infant, child, adolescent) whole blood samples, and for synovial, serous, and CSF body fluids (Tables 28, 29, 30). This demonstrates the ability to define expected values for various populations. |
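For the Detection Limits rows in the table above, the sketch below (Python) illustrates the commonly used parametric CLSI EP17-style calculations: LoB from blank replicates and LoD from LoB plus low-sample variability. The replicate data are fabricated, and LoQ is shown only as the lowest level meeting a hypothetical precision goal, which may differ from the approach actually used in the submission.

```python
from statistics import mean, stdev

def limit_of_blank(blank_results):
    """Parametric LoB = mean(blank) + 1.645 * SD(blank)."""
    return mean(blank_results) + 1.645 * stdev(blank_results)

def limit_of_detection(lob, low_sample_results):
    """Parametric LoD = LoB + 1.645 * SD(low-concentration sample)."""
    return lob + 1.645 * stdev(low_sample_results)

def limit_of_quantitation(level_means_and_cvs, cv_goal_pct=20.0):
    """LoQ taken here as the lowest level whose %CV meets a precision goal
    (hypothetical goal; submissions may define LoQ differently)."""
    meeting = [level for level, cv in level_means_and_cvs if cv <= cv_goal_pct]
    return min(meeting) if meeting else None

# Fabricated BFWBC-style replicates (10^6/L):
blanks = [0.0, 1.0, 0.5, 0.8, 0.2, 0.6, 0.9, 0.4, 0.7, 0.3]
low = [3.1, 2.6, 3.4, 2.9, 3.8, 2.7, 3.2, 3.0, 3.5, 2.8]
lob = limit_of_blank(blanks)
lod = limit_of_detection(lob, low)
loq = limit_of_quantitation([(5.0, 18.0), (4.0, 22.0), (3.0, 31.0)])
print(round(lob, 2), round(lod, 2), loq)
```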
2. Sample sizes used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Repeatability:
- Whole Blood: 116 residual K2EDTA whole blood samples (mixed normal and contrived for extremes) used for within-run repeatability.
- Body Fluid: 87 residual body fluid samples (Synovial, Serous and Cerebrospinal Fluids) (mixed normal and contrived for extremes) used for within-run repeatability.
- Provenance: Retrospective, samples around medical decision levels or contrived to cover analytical measuring range. Collected from multiple sites (4 test sites for whole blood, 4 test sites for body fluid). Country of origin is not explicitly stated beyond "4 test sites" which include "2 US sites and 2 European sites" for some studies, implying data from both regions.
- Reproducibility:
- Whole Blood: Three levels of control material (ABX Difftrol and ABX Minotrol Retic) run in duplicate twice a day for a minimum of 20 days (4 sites x 2 runs/day x 20 days x 2 replicates = 320 runs per level for each parameter).
- Body Fluids: Two levels of control material (BFTROL) run in duplicate twice a day for a minimum of 20 days (total 320 runs per level for each parameter).
- Provenance: Control materials, conducted at 4 test sites (4 instruments, one per site).
- Linearity:
- Whole Blood: Minimum of seven concentration levels (commercial or prepared from dilutions) for each parameter. Each level tested in a minimum of 4 replicates on 2 instruments using 2 reagent lots.
- Body Fluids: Minimum of seven concentration levels (prepared from dilutions) for each parameter. Each level tested in a minimum of 4 replicates on 2 instruments using 2 reagent lots.
- Provenance: One test site (for both whole blood and body fluids).
- Interferences:
- Whole Blood: Two concentration levels of interferent (Hemoglobin, Lipemia, Bilirubin, Glucose, Yeast) for direct interference. A subset of samples from the method comparison study (unique, native whole blood specimens identified with potential interference analysis, minimum of 17 specimens per interferent) for intrinsic interferences.
- Body Fluids: Two concentration levels of interferent (Hemoglobin, Lipemia, Bilirubin, Total Protein, Yeast) for direct interference. A subset of samples from the method comparison study (unique, native body fluid specimens identified with potential interferents) for intrinsic interferences.
- Provenance: 4 test sites (for both whole blood and body fluids).
- Stability:
- Whole Blood: 14 whole venous blood specimens.
- Body Fluid: 28 body fluid specimens (13 serous, 7 synovial, and 8 cerebrospinal fluids).
- Provenance: 3 test sites for whole blood, 2 test sites for body fluid.
- Detection Limits (LoB, LoD, LoQ):
- Six blank samples, six low concentration samples (for LoD), at least four low concentration samples (for LoQ). Each sample run 10 repeated times.
- Provenance: Not specified beyond "different analyzers" and "two different reagent lots".
- Carry-over:
- High and low target value samples run consecutively (3 high, 3 low) for each parameter. Three sets of carry-over sequences.
- Provenance: One test site, three analyzers.
- Method Comparison:
- Whole Blood: 969 venous and/or capillary specimens (pediatric (≤21 years) and adult). 143 with known medical conditions. Maximum 10% contrived.
- Body Fluids: 427 residual body fluid specimens (pediatric (≤21 years) and adult) - 174 synovial, 138 serous, 115 CSF.
- Provenance: 4 clinical sites (2 US sites and 2 European sites for whole blood); 3 clinical sites (2 US sites and 1 European site for body fluids). Mostly retrospective (leftover specimens).
- Matrix Comparison (Comparability between sampling types):
- 84 normal and pathological paired capillary and venous whole blood specimens.
- Provenance: One clinical site. Prospective collection.
- Matrix Comparison (Comparability between body fluid anticoagulants):
- Synovial fluid: 9 without anticoagulant, 39 with K2EDTA, 92 with Lithium Heparin, 34 with Sodium Heparin.
- Serous fluid: 82 without anticoagulant, 56 with K2EDTA.
- Provenance: 3 clinical sites.
- Matrix Comparison (Comparability between analytical modes):
- DIR vs DIF: 166 normal and pathological residual whole blood specimens.
- RBC_PLTO vs DIF: 172 normal and pathological residual whole blood specimens.
- DIF_LV vs DIF: 187 normal and pathological residual whole blood specimens.
- Provenance: One clinical site for each comparison.
- Matrix Comparison (Comparability mode to mode):
- 83 normal and pathological residual whole blood samples. (Automatic rack mode vs manual STAT mode).
- Provenance: One clinical site.
- Clinical Sensitivity:
- 456 residual normal and abnormal whole blood samples (from method comparison study).
- Provenance: 4 clinical sites.
- Expected Values/Reference Range:
- Adult Whole Blood: 240 apparently healthy adults (120 male, 120 female).
- Pediatric Whole Blood: At least 80 apparently healthy neonates, infants, children, and adolescents (