Search Results
Found 3 results
510(k) Data Aggregation
(67 days)
SYSMEX CORPORATION OF AMERICA
The Sysmex® XT-Series is intended for in vitro diagnostic use in the clinical laboratory as a multi-parameter hematology analyzer.
The XT-Series is an automated hematology analyzer which consists of four principal units: (1) the Main Unit, which aspirates, dilutes, mixes, and analyzes whole blood samples; (2) the Sampler Unit, which supplies samples to the Main Unit automatically; (3) the IPU (Information Processing Unit), which processes data from the Main Unit and provides the operator interface with the system; and (4) the Pneumatic Unit, which supplies pressure and vacuum to the Main Unit.
The provided document is a 510(k) summary for the Sysmex® XT-Series Automated Hematology Analyzer. It describes the device, its intended use, and claims substantial equivalence to a predicate device (Sysmex® XE-2100). However, it does not contain the detailed information necessary to fully answer all aspects of your request regarding acceptance criteria and the comprehensive study that proves the device meets those criteria.
Here's what can be extracted and what information is missing:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state "acceptance criteria" with numerical targets for performance. Instead, it claims "Studies were performed to evaluate the equivalency of the XT-Series to the XE-2100. Results indicated equivalent performance." This implies the acceptance criterion was "equivalent performance" to the predicate device.
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Equivalent to Sysmex® XE-2100 | "Results indicated equivalent performance" |
The comparison table on page 1 of the document focuses on features and parameters rather than specific performance metrics (e.g., accuracy, precision, sensitivity, specificity). It states "Performance: Same" for the XT-Series compared to the XE-2100, which has "Proven performance in FDA submission." This is a high-level statement rather than specific numerical performance data.
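The summary reports only a qualitative "equivalent performance" claim. For context, equivalence between a candidate analyzer and a predicate is typically quantified in a method-comparison study using paired measurements, correlation, and Bland-Altman bias/limits of agreement. The sketch below illustrates that general approach with simulated WBC counts; all values are hypothetical and none come from the 510(k) summary.

```python
# Illustrative method-comparison sketch (simulated data, not from the
# 510(k) summary): paired WBC counts from a predicate and a candidate
# analyzer, assessed by Pearson correlation and Bland-Altman statistics.
import numpy as np

rng = np.random.default_rng(0)
predicate = rng.uniform(2.0, 15.0, size=50)        # predicate WBC, 10^3/uL
candidate = predicate + rng.normal(0.0, 0.15, 50)  # candidate readings + noise

r = np.corrcoef(predicate, candidate)[0, 1]        # Pearson correlation
diff = candidate - predicate
bias = diff.mean()                                 # mean difference (bias)
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)         # Bland-Altman 95% limits

print(f"r = {r:.3f}, bias = {bias:.3f}, LoA = ({loa[0]:.2f}, {loa[1]:.2f})")
```

A high correlation with negligible bias and narrow limits of agreement is the usual quantitative basis for an "equivalent performance" conclusion; the actual acceptance thresholds used for the XT-Series are not stated in the document.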
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This information is not provided in the document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided in the document. For an automated hematology analyzer, the "ground truth" would typically be established by reference methods, pathologist review of slides, or other established laboratory techniques, rather than "experts" in the context of image interpretation. However, the document doesn't detail how the truth was established for the comparison studies.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance
This type of study is not applicable to this device. This is an automated hematology analyzer, not an AI-assisted diagnostic tool that humans would use for interpretation. The comparison is between two automated devices.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, this is a standalone device. The entire purpose of the 510(k) is to demonstrate the performance of the XT-Series analyzer itself. The "Clinical Performance Data" section on page 0 describes studies "to evaluate the equivalency of the XT-Series to the XE-2100," indicating a standalone performance comparison.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document does not explicitly state how "ground truth" was established for the performance studies. For hematology analyzers, "ground truth" for parameters like WBC, RBC, HGB, PLT, and differential counts would typically involve:
- Reference methods (e.g., manual differential counts, international reference standards for HGB).
- Correlation with clinically relevant patient outcomes (less likely for a 510(k) for an analyzer).
- Cross-validation with known control materials.
The document only states that "Studies were performed to evaluate the equivalency of the XT-Series to the XE-2100." This implies the predicate device (XE-2100) itself served as a de facto reference or a well-established standard against which the new device was compared.
8. The sample size for the training set
This information is not applicable and not provided. This device is not described as using machine learning or AI that would require a separate "training set." It is an automated analyzer based on physical principles like flow cytometry and DC detection (as described in the "Principles" section).
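The "Principles" section referenced above mentions flow cytometry and DC (impedance) detection. As a rough illustration of the impedance-counting principle, not of Sysmex's actual implementation: each cell crossing the aperture produces a voltage pulse roughly proportional to cell volume, and pulses above a discriminator threshold are counted. The signal below is entirely synthetic.

```python
# Illustrative sketch of DC (impedance) pulse counting (synthetic signal,
# not the device's actual algorithm): cells appear as voltage pulses over
# baseline noise; counting pulses above a threshold yields the cell count.
import numpy as np

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 0.02, 10_000)          # electronic noise floor
signal = baseline.copy()
pulse_positions = rng.choice(10_000, size=120, replace=False)
signal[pulse_positions] += rng.uniform(0.5, 1.5, 120)  # cell-sized pulses

threshold = 0.3                                   # discriminator level
count = int(np.sum(signal > threshold))           # pulses above threshold
print(count)
```

In a real analyzer the pulse height distribution also drives volume histograms and size-based discrimination, which is why no "training set" in the machine-learning sense is involved.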
9. How the ground truth for the training set was established
This information is not applicable as there is no mention of a training set for an AI/ML model.
(57 days)
SYSMEX CORPORATION OF AMERICA
The HPC (hematopoietic progenitor cell) parameter of the IMI Channel on the Sysmex® SE-9500 and XE-2100 for in Vitro Diagnostics is used as a screen for the optimal presence of hematopoietic progenitor cells in peripheral blood and cord blood samples.
The SE-9500 and XE-2100 have an immature myeloid information (IMI) channel, which identifies and enumerates immature cells in addition to the traditionally reported parameters of an automated cell differential. (Note: Special software/hardware is required to obtain results described.)
The provided text describes a 510(k) submission for the "HPC (Hematopoietic Progenitor Cell) parameter on the IMI Channel of the Sysmex® SE-9500 and XE-2100, Automated Hematology Analyzer." The document does not explicitly present "acceptance criteria" or "reported device performance" in a structured table, nor does it use those terms. However, it does present a "Comparison Table to Predicate Methods" that outlines the characteristics and performance of the new HPC parameter relative to predicate devices (Colony Forming Unit (CFU) and Total Nucleated Count (TNC)) and a routine method (Flow Cytometry CD34+).
Based on the provided information, here's an attempt to extract the requested details:
1. Table of acceptance criteria and the reported device performance
The document does not explicitly define quantitative "acceptance criteria" like thresholds for accuracy, sensitivity, or specificity. Instead, it relies on demonstrating "substantial equivalence" to predicate methods, particularly in terms of "accuracy" (correlation to CFU).
| Acceptance Criteria (Inferred from comparison to predicates) | Reported Device Performance (HPC parameter of IMI Channel) |
|---|---|
| Intended Use: Screen for optimal presence of progenitor cells in stem cell harvest & cord blood. | Same as predicate methods (CFU, TNC, Flow Cytometry CD34+). |
| Methodology: Hematopoietic progenitor cell count from hematology analyzer. | Hematopoietic progenitor cell count from hematology analyzer. |
| Anticoagulant Type: EDTA (for HPC). | EDTA. |
| Specimen Type: Peripheral blood & cord blood. | Peripheral blood & cord blood. |
| Accuracy: Comparison to CFU should show good correlation. | Comparison to CFU showed good correlation. |
| Time Required (per sample): Short (90 seconds, comparable to TNC). | 90 seconds. |
| Cost (per sample): Low (approx. $1.35, comparable to TNC). | Approx. $1.35. |
| Quality of Technical Support: Hematology laboratory personnel; run in duplicate. | Hematology laboratory personnel; run in duplicate. |
Study Proving Acceptance Criteria (Substantial Equivalence)
The study described is a comparison of the new HPC parameter to established predicate methods.
2. Sample size used for the test set and the data provenance
The document does not specify the sample size used for the test set or the data provenance (e.g., country of origin, retrospective/prospective). It only states (text apparently truncated in the source): "the HPC parameter to the predica indicated equivalent performance. The performance data demonstrated substantial equivalence."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not provide information on the number of experts used or their qualifications for establishing ground truth.
4. Adjudication method for the test set
The document does not describe any adjudication method.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance
This is not an MRMC study. The device is an automated hematology analyzer parameter, not an AI-assisted interpretation tool for human readers. Therefore, this question is not applicable.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
Yes, the performance described is that of the standalone device (HPC parameter on the Sysmex® SE-9500 and XE-2100 automated hematology analyzers). It functions without human-in-the-loop interpretation for its primary output. The "Quality of Technical Support" section mentions "Hematology laboratory personnel; Run in duplicate," which refers to the operation of the device and quality control checks, not human interpretation of the device's primary result.
7. The type of ground truth used
The primary ground truth appears to be the Colony Forming Unit (CFU) method. The document states: "Method of real counting of progenitor cells established as reference method" for CFU, and the accuracy of the HPC parameter (as well as TNC and Flow Cytometry CD34+) is assessed by "Comparison to CFU showed good correlation."
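Since the summary assesses accuracy as "good correlation" between the HPC parameter and the CFU reference method, a hedged sketch of what that comparison typically looks like may be useful: paired reference and analyzer values, a Pearson correlation coefficient, and a least-squares fit. The data below are simulated; the actual study values and sample size are not given in the document.

```python
# Hypothetical sketch (simulated data): correlating an analyzer screening
# parameter against a reference method, as the summary describes for the
# HPC parameter versus CFU. The slope/intercept quantify systematic offset.
import numpy as np

rng = np.random.default_rng(42)
cfu = rng.uniform(10, 500, 40)                 # reference CFU counts
hpc = 0.8 * cfu + rng.normal(0, 20, 40)        # simulated HPC readings

r = np.corrcoef(cfu, hpc)[0, 1]                # Pearson correlation
slope, intercept = np.polyfit(cfu, hpc, 1)     # least-squares regression

print(f"r = {r:.3f}, slope = {slope:.2f}, intercept = {intercept:.1f}")
```

In practice a screening parameter need not match the reference method one-to-one; a strong, stable correlation is enough for the screening intended use claimed here.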
8. The sample size for the training set
The document does not provide information on the sample size used for training, nor does it explicitly describe a distinct "training set" in the context of typical machine learning models. This is a hematology analyzer parameter, and its development would likely involve calibration and validation rather than what is typically understood as an ML training set.
9. How the ground truth for the training set was established
Since a "training set" in the machine learning sense is not explicitly discussed, the establishment of ground truth for development/calibration would inherently rely on the same established methods, primarily the Colony Forming Unit (CFU) method, which is considered the "reference method" for real counting of progenitor cells.
(81 days)
SYSMEX CORPORATION OF AMERICA