Search Results
Found 3 results
510(k) Data Aggregation
(304 days)
Sysmex UF-5000 Fully Automated Urine Particle Analyzer
The Sysmex® UF-5000 Fully Automated Urine Particle Analyzer is an automated urine particle analyzer for in vitro diagnostic use in screening patient populations found in clinical laboratories. The Sysmex® UF-5000 Fully Automated Urine Particle Analyzer analyzes the following parameters in urine samples: RBC, WBC, Epithelial Cells, Cast, and Bacteria, and flags the presence of the following: Pathologic Cast, Crystals, Sperm, Yeast-like cell, and Mucus.
The Sysmex® UF-5000 Fully Automated Urine Particle Analyzer is an automated urine particle analyzer used in the clinical laboratory to quantitatively analyze formed elements in urine samples and to flag the presence of particles/cells in the sample. It provides screening of abnormal samples, as well as automation and better efficiency in the laboratory. The analyzer reports analysis results for five enumerated parameters in urine: RBC (Red Blood Cells), WBC (White Blood Cells), EC (Epithelial Cells), CAST, and BACT (Bacteria). It also reports flagging information for the following parameters in urine: Pathologic Cast; Crystal; Sperm; Yeast-like cell; and Mucus. This flagging information alerts the operator to the need for further testing and/or review.
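To make the reporting structure concrete, the following is a minimal sketch of a result record holding the five enumerated parameters plus the review flags. The class, field names, and units are hypothetical illustrations only, not the analyzer's actual data format or software interface.

```python
from dataclasses import dataclass, field

# Hypothetical result record mirroring the reported outputs: five enumerated
# (quantitative) parameters plus review flags. Names and units are illustrative.
@dataclass
class UrineParticleResult:
    rbc_per_ul: float   # RBC (Red Blood Cells), particles/uL
    wbc_per_ul: float   # WBC (White Blood Cells), particles/uL
    ec_per_ul: float    # EC (Epithelial Cells), particles/uL
    cast_per_ul: float  # CAST, particles/uL
    bact_per_ul: float  # BACT (Bacteria), particles/uL
    flags: set[str] = field(default_factory=set)  # e.g. {"Pathologic Cast", "Mucus"}

    def needs_review(self) -> bool:
        # Any flag alerts the operator to the need for further testing/review.
        return bool(self.flags)

result = UrineParticleResult(12.4, 3.1, 0.8, 0.0, 55.0, flags={"Mucus"})
print(result.needs_review())  # True
```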
The Sysmex® UF-5000 Fully Automated Urine Particle Analyzer is a dedicated system for the analysis of microscopic formed elements in urine and uses a Microsoft® Windows Operating System. The analyzer consists of the following units: (1) Main Unit, which aspirates, dilutes, mixes, and analyzes urine samples, processes the analysis data, and provides the operator interface with the system; (2) Sampler Unit, which supplies samples to the Main Unit automatically; and (3) Pneumatic Unit, which supplies pressure and vacuum to the Main Unit.
The analyzer uses five reagents: UF-CELLSHEATH (sheath reagent), UF-CELLPACK CR and UF-CELLPACK SF (diluents), and UF-Fluorocell CR and UF-Fluorocell SF (stains). The quality control material is UF-CONTROL.
The provided text describes the performance data and conclusions for the Sysmex® UF-5000 Fully Automated Urine Particle Analyzer, seeking to demonstrate its substantial equivalence to the predicate device, the Sysmex® UF-1000i. This is a 510(k) premarket notification, which focuses on demonstrating substantial equivalence to a legally marketed predicate rather than independently establishing safety and effectiveness.
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based only on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not present a formal table of specific acceptance criteria (e.g., target accuracy percentages, precision ranges) that the Sysmex® UF-5000 had to meet. Instead, it describes general categories of performance testing conducted to demonstrate "equivalent performance" to the predicate device (an illustrative sketch of how a few of these metrics are conventionally computed follows the table):
Performance Category | Reported Device Performance |
---|---|
Limits of Blank, Detection, Quantitation (LoB/LoD/LoQ) | Testing was conducted. (Specific values or comparison to predicate's LoD/LoQ are not detailed in this summary, but the implication is that they are comparable or better, consistent with the smaller minimum particle size detected.) |
Linearity | Testing was conducted. (Specific ranges or linearity coefficients are not detailed.) |
Precision (Repeatability & Reproducibility) | Testing was conducted. (Specific CVs or precision limits are not detailed.) |
Carryover | Testing was conducted. (Specific thresholds or results are not detailed.) |
Specimen Stability | Testing was conducted. (Specific stability periods or criteria are not detailed.) |
Reference Interval | Establishment of reference intervals was part of the evaluation. (The specific intervals or how they were established are not detailed.) |
Method Comparison (Accuracy) | Clinical and analytical validation testing were conducted to show equivalent performance to the predicate Sysmex® UF-1000i analyzer. "Accuracy (method comparison)" was included in the evaluation. (Specific correlation coefficients, bias, or agreement rates for parameters like RBC, WBC, etc., are not provided in this summary section of the 510(k). It only states that the evaluation "established that the performance, functionality, and reliability... are substantially equivalent.") |
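As an illustration only, the sketch below shows how limits of blank and detection and repeatability are conventionally derived (in the spirit of CLSI EP17 and EP05). The replicate data are invented, since the 510(k) summary reports no values, protocols, or acceptance thresholds.

```python
import statistics

# Invented replicate data; constants follow textbook CLSI-style conventions,
# not Sysmex's actual study protocol or results.
blank_results = [0.0, 0.2, 0.1, 0.0, 0.3, 0.1, 0.2, 0.0]  # blank-sample replicates
low_results = [4.8, 5.2, 5.0, 4.9, 5.3, 5.1, 4.7, 5.0]    # low-concentration replicates

# Limit of Blank (parametric form): mean of blanks + 1.645 x SD of blanks.
lob = statistics.mean(blank_results) + 1.645 * statistics.stdev(blank_results)

# Limit of Detection: LoB + 1.645 x SD of a low-level sample.
lod = lob + 1.645 * statistics.stdev(low_results)

# Repeatability (within-run precision) expressed as %CV.
cv_percent = 100 * statistics.stdev(low_results) / statistics.mean(low_results)

print(f"LoB = {lob:.2f}, LoD = {lod:.2f}, repeatability CV = {cv_percent:.1f}%")
```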
2. Sample size used for the test set and the data provenance
The document does not explicitly state the sample size used for the test set. It mentions "Clinical and analytical validation testing" and "Method Comparison" but provides no numbers of samples or patients.
Regarding data provenance: The document identifies Sysmex America Inc. (Illinois, USA) as the submitter. While it doesn't explicitly state the country of origin for the clinical samples, it's highly likely to be the USA, given the submitter's location and the FDA submission context. The study is implicitly prospective in nature, as it involves newly conducted validation testing for a new device to demonstrate its performance characteristics.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not provide any information about the number or qualifications of experts used to establish ground truth for the test set. For a device like an automated cell counter, the "ground truth" for method comparison studies is typically established against a "gold standard" or reference method, such as manual microscopy performed by trained laboratory professionals (medical technologists, clinical pathologists), or against the predicate device itself. However, the text does not elaborate on this.
4. Adjudication method for the test set
The document does not describe any adjudication method. Given that the device is an automated cell counter, the "ground truth" would likely be established through a reference laboratory method rather than by a panel of human adjudicators, as would be typical for an imaging AI.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not performed. This type of study is more common for diagnostic imaging devices where human interpretation is a primary component of the diagnostic pathway. For an automated laboratory analyzer, the performance is assessed against reference methods and statistical agreement with the predicate.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, the studies described are inherently "standalone" in the context of the device's operation. The Sysmex® UF-5000 Fully Automated Urine Particle Analyzer is an automated device designed to analyze urine samples and report parameters directly. The performance evaluation (LoB/LoD/LoQ, linearity, precision, carryover, method comparison) assesses the device's analytical performance on its own, without direct real-time human intervention in the analysis process itself. Human interpretation of the results (e.g., flagging information leading to further testing) is part of its intended use, but the analytical performance is standalone.
7. The type of ground truth used
The document implies that the ground truth for the performance studies, particularly "Method Comparison," would be established by comparing the Sysmex® UF-5000's results against those of the predicate device (Sysmex® UF-1000i) and/or other established laboratory reference methods for urine particle analysis. It does not explicitly state which ultimate ground truth was used (e.g., pathology, manual microscopy, or clinical outcomes data). For quantitative parameters like RBC and WBC counts in urine, the "ground truth" often refers to the accepted values obtained from a reference measurement method.
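For illustration, a minimal method-comparison summary of the kind typically reported (correlation, regression slope/intercept, mean bias) could be computed as below. The paired counts are hypothetical, and real submissions usually use Deming or Passing-Bablok regression rather than the ordinary least squares shown here.

```python
import statistics  # Python 3.10+ for correlation() and linear_regression()

# Hypothetical paired counts (particles/uL) for the same samples run on the
# predicate (UF-1000i) and candidate (UF-5000); values are invented solely to
# show the usual method-comparison summaries.
predicate = [10.0, 25.0, 48.0, 80.0, 150.0, 310.0, 620.0]
candidate = [11.0, 24.0, 50.0, 78.0, 155.0, 305.0, 630.0]

r = statistics.correlation(predicate, candidate)                        # Pearson r
slope, intercept = statistics.linear_regression(predicate, candidate)  # OLS fit
mean_bias = statistics.mean(c - p for c, p in zip(candidate, predicate))

print(f"r = {r:.3f}, slope = {slope:.3f}, intercept = {intercept:.2f}, mean bias = {mean_bias:.2f}")
```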
8. The sample size for the training set
This document describes a 510(k) submission for an automated laboratory instrument, not a machine learning or AI model in the modern sense that typically involves "training sets." The "algorithm" or measurement principles (flow cytometry, laser detection, specific reagents) are embedded in the device's design. Therefore, the concept of a "training set" as it applies to AI models is not relevant here, and no information on a training set sample size is provided.
9. How the ground truth for the training set was established
As explained above, the concept of a "training set" for the Sysmex® UF-5000 as an automated instrument is not applicable in the same way it would be for AI/ML algorithms. The device's operational parameters and internal algorithms are based on established scientific principles of flow cytometry and are likely refined during product development and engineering, rather than "trained" on a dataset in the AI sense.
(118 days)
SYSMEX UF-500I AUTOMATED URINE PARTICLE ANALYZER
The Sysmex® UF-500i is an automated urine particle analyzer for in vitro diagnostic use in screening patient populations found in clinical laboratories. The UF-500i analyzes the following parameters in urine samples: RBC, WBC, Epithelial Cells, Cast, and Bacteria, and flags the presence of the following: Pathologic Cast, Crystal, Sperm, Small Round Cell, Yeast-like cell, and Mucus.
The Sysmex® UF-500i, an automated urine particle analyzer, is a dedicated system for the analysis of microscopic formed elements in urine specimens. The instrument consists of three principal units: (1) Main Unit, which aspirates, dilutes, mixes, and analyzes urine samples; (2) Auto Sampler Unit, which supplies samples to the Main Unit automatically; and (3) IPU (Information Processing Unit), which processes data from the Main Unit and provides the operator interface with the system. The UF-500i is equipped with a Sampler that provides continuous automated sampling for up to 60 tubes. The instrument utilizes Sysmex flow cytometry with a red semiconductor laser to analyze the organized elements of urine. Particle characterization and identification are based on detection of forward scatter, fluorescence, and adaptive cluster analysis. Using its own reagents, the UF-500i automatically classifies the organized elements of urine and carries out all processes, from aspiration of the sample to output of results, without operator intervention. Analysis results and graphics are displayed on the IPU screen and can be printed on any of the available printers or transmitted to a host computer.
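As a deliberately simplified illustration of classifying events in scatter/fluorescence space, the toy below gates on fixed thresholds. The cutoffs and class assignments are invented and bear no relation to Sysmex's proprietary adaptive cluster analysis.

```python
# Toy threshold-gating classifier over two channels: forward scatter (a proxy
# for particle size) and fluorescence (a proxy for nucleic-acid staining).
# Cutoffs and class assignments are invented for illustration only.
def classify_event(forward_scatter: float, fluorescence: float) -> str:
    if forward_scatter < 10 and fluorescence > 50:
        return "BACT"  # small, strongly stained
    if forward_scatter < 40 and fluorescence < 10:
        return "RBC"   # small to medium, little nucleic acid
    if forward_scatter < 60 and fluorescence >= 10:
        return "WBC"   # medium, nucleated
    if forward_scatter >= 60 and fluorescence >= 10:
        return "EC"    # large, nucleated
    return "UNCLASSIFIED"

events = [(5, 80), (30, 4), (45, 35), (90, 60), (70, 2)]
counts: dict[str, int] = {}
for fsc, fl in events:
    label = classify_event(fsc, fl)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # {'BACT': 1, 'RBC': 1, 'WBC': 1, 'EC': 1, 'UNCLASSIFIED': 1}
```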
The provided document primarily consists of a 510(k) summary for the Sysmex® UF-500i, an automated urine particle analyzer. It focuses on demonstrating substantial equivalence to a predicate device (Sysmex® UF-1000i) rather than presenting a standalone study with detailed acceptance criteria and performance against those criteria in a typical clinical study format.
Therefore, many of the requested details regarding acceptance criteria, sample sizes, ground truth establishment, expert qualifications, and MRMC studies are not explicitly stated within this 510(k) summary. These types of detailed studies are generally performed during the development and validation phases and are summarized or referenced in the 510(k) where substantial equivalence to a known predicate is the primary claim.
Here's an attempt to answer your questions based on the available information:
1. Table of acceptance criteria and the reported device performance
The document does not explicitly state quantitative "acceptance criteria" in the format of a clinical study with specific thresholds for sensitivity, specificity, accuracy, etc. Instead, it refers to "method and flagging comparison studies along with reference interval comparison to the UF-1000i," concluding that "there is no difference between the UF-1000i and the UF-500i."
This implies that the acceptance criterion was demonstrating "no difference" or substantial equivalence to the predicate device (Sysmex® UF-1000i) in terms of its ability to analyze and flag specific parameters; an illustrative sketch of how such flagging agreement is conventionally summarized follows the table.
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Substantial equivalence to predicate device (Sysmex® UF-1000i) in terms of analytical performance for: | "Method and flagging comparison studies along with reference interval comparison to the UF-1000i were performed and there is no difference between the UF-1000i and the UF-500i." |
- RBC analysis | Reported as "no difference" compared to UF-1000i. |
- WBC analysis | Reported as "no difference" compared to UF-1000i. |
- Epithelial Cells analysis | Reported as "no difference" compared to UF-1000i. |
- Cast analysis | Reported as "no difference" compared to UF-1000i. |
- Bacteria analysis | Reported as "no difference" compared to UF-1000i. |
- Flagging of Pathologic Cast, Crystal, Sperm, Small Round Cell, Yeast-like cell, and Mucus | Reported as "no difference" compared to UF-1000i. |
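To show how such flagging comparisons are conventionally summarized, the sketch below computes overall, positive, and negative percent agreement for a single flag. The paired flag calls are invented, since the summary reports only the qualitative conclusion of "no difference".

```python
# Invented paired flag calls for a single flag (e.g., "Crystal") on samples run
# on both analyzers; the 510(k) summary reports only the qualitative conclusion.
uf1000i_flags = [True, True, False, False, True, False, False, True, False, False]
uf500i_flags = [True, True, False, False, False, False, False, True, False, True]

n = len(uf1000i_flags)
both_pos = sum(a and b for a, b in zip(uf1000i_flags, uf500i_flags))
both_neg = sum((not a) and (not b) for a, b in zip(uf1000i_flags, uf500i_flags))

overall_agreement = 100 * (both_pos + both_neg) / n
positive_agreement = 100 * both_pos / sum(uf1000i_flags)        # vs. predicate-positive samples
negative_agreement = 100 * both_neg / (n - sum(uf1000i_flags))  # vs. predicate-negative samples

print(f"OPA = {overall_agreement:.0f}%, PPA = {positive_agreement:.0f}%, NPA = {negative_agreement:.0f}%")
```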
2. Sample size used for the test set and the data provenance
The document does not specify the exact sample size used for the comparison studies. It also does not explicitly state the country of origin of the data or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not provided in the 510(k) summary.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
This information is not provided in the 510(k) summary.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
An MRMC study is not applicable here because the device is an automated urine particle analyzer. It processes urine samples directly and classifies elements, not assisting human readers with interpretation of images. Therefore, the concept of "human readers improve with AI vs without AI assistance" does not apply to this device.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, the device (Sysmex® UF-500i) is an automated urine particle analyzer. It operates as a standalone system (flow cytometry with adaptive cluster analysis) without a human in the loop for its primary analysis and classification of particles. The comparison studies described in the 510(k) summary would have evaluated this standalone performance against the predicate device.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document implies that the ground truth for comparison was the performance of the predicate device, the Sysmex® UF-1000i. The studies focused on demonstrating that "there is no difference" between the UF-500i and the UF-1000i, meaning the UF-1000i's established performance served as the reference or "ground truth" for the comparison. It indicates "method and flagging comparison studies" which would typically involve comparing the results of both instruments on the same samples.
8. The sample size for the training set
The document does not provide information about a "training set" as this device is a substantial equivalence claim to an existing technology (flow cytometry with adaptive cluster analysis) rather than a novel AI/ML algorithm that would typically undergo explicit training on a large dataset. The underlying analysis principles are established.
9. How the ground truth for the training set was established
As there's no explicit mention of a "training set" for a novel AI/ML algorithm in this 510(k) summary, this question is not directly applicable. The device relies on established flow cytometry principles and classification algorithms.
(64 days)
SYSMEX UF-50
The Sysmex UF-50 is a fully automated urine cell analyzer intended for in vitro diagnostic use in urinalysis within the clinical laboratory. The UF-50 replaces microscopic review of normal/abnormal specimens and flags specimens containing certain abnormalities which indicate the need for further testing. Laboratorians are responsible for final microscopic review of flagged abnormalities.
The UF-50 is a fully automated urine cell analyzer for urinalysis in clinical laboratories. It analyzes formed elements in urine using flow cytometry technology.
The provided document describes the Sysmex UF-50, a fully automated urine cell analyzer. The document focuses on its substantial equivalence to a predicate device, the Sysmex UF-100.
Here's an analysis of the acceptance criteria and the study that proves the device meets them:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state numerical acceptance criteria for the UF-50. Instead, the primary "acceptance criterion" is demonstrated by establishing substantial equivalence to the predicate device, the Sysmex UF-100. The reported device performance is that the correlation results showed the two analyzers' performance to be correlated, thereby supporting the claim of substantial equivalence.
Since exact numerical criteria are not given, a direct comparison table cannot be constructed. However, the qualitative performance is reported as aligned with the predicate device.
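As an illustration of one common way analyzer-to-analyzer agreement is summarized alongside correlation, the sketch below computes a Bland-Altman-style mean difference and limits of agreement on hypothetical paired counts; no actual data appear in the summary.

```python
import statistics

# Invented paired WBC counts (particles/uL) from the UF-100 (predicate) and
# UF-50 (candidate); the 510(k) summary gives no numerical correlation data.
uf100 = [12.0, 30.0, 55.0, 110.0, 240.0, 480.0]
uf50 = [13.0, 28.0, 57.0, 105.0, 250.0, 470.0]

diffs = [c - p for c, p in zip(uf50, uf100)]
bias = statistics.mean(diffs)
sd = statistics.stdev(diffs)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement

print(f"mean difference = {bias:.1f}, limits of agreement = ({loa_low:.1f}, {loa_high:.1f})")
```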
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: The document does not specify the exact sample size for the correlation studies. It generally refers to "correlation studies."
- Data Provenance: Not specified. It's unclear if the data was retrospective or prospective, or the country of origin.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not explicitly provided. Given that the study focuses on correlating the UF-50 to a predicate device (UF-100) and that the UF-50 flags specimens for "final microscopic review of abnormalities" by laboratorians, any ground truth beyond the predicate comparison likely involved microscopic review by qualified laboratorians, but the number and qualifications of those reviewers are not detailed.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe any specific adjudication method for establishing ground truth, as the primary comparison is between the UF-50 and UF-100.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study: No, a multi-reader multi-case comparative effectiveness study was not done. The study compares the UF-50 (an automated analyzer) to another automated analyzer (UF-100), not human readers with and without AI assistance.
- Effect Size: Not applicable, as no such study was performed. The device is intended to flag specimens for human review, not to improve human reader performance directly.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, a standalone study of the algorithm's performance was done. The "Clinical Performance Data" section describes "Correlation studies were performed to evaluate the equivalency of the UF-50 performance compared to the predicate device, the UF-100." This indicates that the UF-50's performance was evaluated independently, comparing its output directly to the predicate device's output. The device itself is described as a "fully automated urine cell analyzer" performing "in vitro diagnostic use," implying standalone operation to generate results that are then compared.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth for the correlation studies appears to be the results obtained from the predicate device, the Sysmex UF-100. The study "evaluate[d] the equivalency of the UF-50 performance compared to the predicate device, the UF-100." Therefore, the UF-100's performance served as the reference for determining the UF-50's equivalency.
8. The sample size for the training set
The document does not describe a "training set" in the context of machine learning. The UF-50 is described as using "flow cytometry technology," which is a well-established analytical method, not typically associated with machine learning training sets in the same way an AI algorithm might be.
9. How the ground truth for the training set was established
Not applicable, as no machine learning training set is described.