The COULTER® LH 500 is a quantitative, automated hematology analyzer For In Vitro Diagnostic Use in clinical laboratories. The LH 500 System provides an automated complete blood count and leukocyte differential. The product also provides semi-automated reticulocyte analysis.
The LH 500 hematology analyzer is designed For In Vitro Diagnostic Use in clinical laboratories. The LH 500 provides an automated complete blood count and leukocyte differential, plus semi-automated reticulocyte analysis. The purpose of the LH 500 hematology analyzer is to separate the normal patient, with all normal system-generated parameters, from the patient who needs additional studies of any of these parameters. These studies might include further measurements of cell size and platelet distribution, a manual WBC differential, or any other definitive test that helps diagnose the patient's condition.
The provided text describes a 510(k) premarket notification for a software modification to the COULTER® LH 500 Hematology Analyzer, not a study presenting detailed acceptance criteria and performance data in the sense expected for a new AI/ML device. The submission focuses on demonstrating substantial equivalence to a previously cleared device. Therefore, a direct mapping to all requested elements (such as training-set sample size, number of experts for ground truth, or an MRMC study) is not fully possible given the nature of the document.
However, I can extract the relevant information and present what is available, inferring acceptance criteria and performance based on the context of a hematology analyzer.
Acceptance Criteria and Device Performance (Inferred from a Hematology Analyzer Context):
For a hematology analyzer, performance is typically assessed by its ability to accurately count and differentiate various blood cell types. The "acceptance criteria" for a modified version would generally relate to maintaining or improving the performance characteristics of the predicate device, especially for the parameters affected by the software change (e.g., differential counts and reticulocyte analysis).
Given that the document describes a 510(k) for a software modification intended to allow cyanide-free reagents and to mitigate anomalies observed in earlier versions, the primary performance assessment would involve demonstrating that the device with the new software and reagents performs comparably to the predicate device with its original reagents and software for key hematological parameters. Specific quantitative acceptance criteria are not explicitly listed in this summary, but they would typically involve metrics like the following (a short computational sketch follows this list):
- Accuracy/Bias: Comparison of counts and differentials to a reference method or predicate device.
- Precision/Reproducibility: Consistency of measurements.
- Correlation: Statistical correlation of results with a reference or predicate device.
- Carryover: Minimal transfer of cells/material between samples.
- Linearity: Accurate measurement across a range of cell concentrations.
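As a rough illustration of how such metrics are computed, here is a minimal Python sketch covering bias, precision, correlation, and carryover. All data values and function names are hypothetical and are not drawn from the submission.

```python
import statistics

def percent_bias(candidate, reference):
    """Accuracy/bias: mean percent difference vs. a reference method."""
    return statistics.mean((c - r) / r * 100 for c, r in zip(candidate, reference))

def percent_cv(replicates):
    """Precision: coefficient of variation (%) across replicate measurements."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100

def carryover_percent(high_run, low_run):
    """Carryover per the common three-high/three-low protocol:
    (l1 - l3) / (h3 - l3) * 100."""
    h1, h2, h3 = high_run
    l1, l2, l3 = low_run
    return (l1 - l3) / (h3 - l3) * 100

# Hypothetical WBC results (x10^3 cells/uL) on the same specimens
wbc_device = [6.1, 7.4, 5.2, 9.8, 4.4]
wbc_reference = [6.0, 7.5, 5.1, 10.0, 4.5]

print(f"Bias: {percent_bias(wbc_device, wbc_reference):+.2f}%")
print(f"%CV:  {percent_cv([6.1, 6.0, 6.2, 6.1, 6.0]):.2f}%")
print(f"Correlation r: {statistics.correlation(wbc_device, wbc_reference):.4f}")  # Python 3.10+
print(f"Carryover: {carryover_percent([98.2, 98.0, 97.9], [0.31, 0.30, 0.30]):.2f}%")
```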
Since the document mentions "various corrections, clarifications and minor performance testing results were added to operator labeling," this implies that some performance testing was conducted, likely demonstrating that the modifications did not negatively impact the device's accuracy or reliability for its intended use.
Table of Acceptance Criteria and Reported Device Performance (Inferred/Generic for Hematology Analyzers):
| Acceptance Criteria Category (Inferred) | Specific Metric (Inferred) | Acceptance Threshold (Typical for Hematology Analyzers) | Reported Device Performance (Based on "Substantially Equivalent") |
|---|---|---|---|
| Accuracy of CBC parameters | White blood cell (WBC) count bias | Within ±5% of reference | Deemed substantially equivalent to predicate |
| Accuracy of CBC parameters | Red blood cell (RBC) count bias | Within ±3% of reference | Deemed substantially equivalent to predicate |
| Accuracy of CBC parameters | Hemoglobin (Hgb) bias | Within ±2% of reference | Deemed substantially equivalent to predicate |
| Accuracy of CBC parameters | Platelet (Plt) count bias | Within ±15% of reference (at low counts) | Deemed substantially equivalent to predicate |
| Accuracy of differential counts | Neutrophil % bias | Within ±5% of reference | Deemed substantially equivalent to predicate |
| Accuracy of differential counts | Lymphocyte % bias | Within ±5% of reference | Deemed substantially equivalent to predicate |
| Accuracy of differential counts | Monocyte % bias | Within ±3% of reference | Deemed substantially equivalent to predicate |
| Accuracy of differential counts | Eosinophil % bias | Within ±2% of reference | Deemed substantially equivalent to predicate |
| Accuracy of differential counts | Basophil % bias | Within ±1% of reference | Deemed substantially equivalent to predicate |
| Reticulocyte analysis | Reticulocyte % bias | Within ±20% of reference (at low counts) | Deemed substantially equivalent to predicate |
| Reproducibility/Precision | %CV for various parameters | Typically < 5% | Deemed substantially equivalent to predicate |
| Functional equivalence | Ability to use new cyanide-free reagents | Successful and accurate operation | Successful (implied by clearance) |
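If thresholds like these were formalized, the pass/fail evaluation could be expressed as a simple check like the sketch below. The limit values mirror the inferred table above; the "measured" biases are invented for illustration only.

```python
# Inferred percent-bias limits mirroring the table above; the measured
# values below are invented for illustration only.
limits = {
    "WBC": 5.0, "RBC": 3.0, "Hgb": 2.0, "Plt": 15.0,
    "Neutrophil %": 5.0, "Lymphocyte %": 5.0, "Monocyte %": 3.0,
    "Eosinophil %": 2.0, "Basophil %": 1.0, "Reticulocyte %": 20.0,
}
measured_bias = {
    "WBC": 1.2, "RBC": -0.8, "Hgb": 0.4, "Plt": 6.5,
    "Neutrophil %": 2.1, "Lymphocyte %": -1.7, "Monocyte %": 1.0,
    "Eosinophil %": 0.9, "Basophil %": 0.3, "Reticulocyte %": 8.2,
}

for param, limit in limits.items():
    bias = measured_bias[param]
    verdict = "PASS" if abs(bias) <= limit else "FAIL"
    print(f"{param:15} bias {bias:+5.1f}% (limit ±{limit:.0f}%): {verdict}")
```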
Detailed Study Information (Based on the Provided Text):
- Sample size used for the test set and the data provenance:
  - Sample size: Not explicitly stated in the provided text. The document mentions that "performance testing results were added to operator labeling," which suggests a test set was used, but its size is not specified.
  - Data provenance: Not explicitly stated. For a 510(k) submission, data typically come from internal manufacturer testing or external clinical sites. The country of origin is not mentioned; the data were most likely collected prospectively to validate the new software/reagent combination.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
  - Not specified. For hematology analyzers, ground truth often involves manual microscopy differential counts performed by trained medical technologists or pathologists, or comparison to established reference methods using samples with known characteristics. The number and qualifications of such experts are not detailed here.
-
Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not specified. If manual microscopy was used to establish ground truth, adjudication (e.g., by a second expert in case of disagreement) would be a standard practice, but the method is not mentioned.
- Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
  - No, an MRMC comparative effectiveness study was not done in the sense of human readers improving with AI assistance. This device is an automated hematology analyzer, not an AI-assisted diagnostic tool that aids human interpretation of images. Its purpose is to automate cell counting and differentiation, reducing or replacing the need for manual review of normal samples. The "AI" referenced in the question typically means modern machine learning for image interpretation, which is not the function or innovation described for this type of hematology analyzer in 2004. The "algorithm changes" mentioned concern internal signal processing and classification, not assistance to human readers in image interpretation.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
  - Yes, a standalone performance assessment was done. The entire operation of a hematology analyzer, including its differential cell counting, is inherently "standalone" in this context. The device directly provides results (CBC and differential) without requiring real-time human interpretation of each case. The "software algorithm changes" and "performance testing results" refer to the standalone performance of the instrument. The device aims to "separate the normal patient... from the patient who needs additional studies," implying that its standalone output is used to flag samples for further manual review or specialized testing (a toy flagging sketch follows this item).
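To make the flagging idea concrete, here is a toy Python sketch of rule-based screening against reference intervals. The intervals and logic are invented for illustration and say nothing about Beckman Coulter's actual flagging algorithms.

```python
# Toy adult reference intervals (conventional units); values are
# illustrative, not the instrument's actual decision rules.
REFERENCE_INTERVALS = {
    "WBC":  (4.0, 11.0),   # x10^3 cells/uL
    "Hgb":  (12.0, 17.5),  # g/dL
    "Plt":  (150, 450),    # x10^3 cells/uL
    "Neutrophil %": (40, 75),
}

def flag_for_review(results):
    """Return the parameters outside their reference interval; an empty
    list means the sample could be reported without further manual study."""
    out_of_range = []
    for param, value in results.items():
        low, high = REFERENCE_INTERVALS[param]
        if not low <= value <= high:
            out_of_range.append(param)
    return out_of_range

sample = {"WBC": 13.2, "Hgb": 14.1, "Plt": 95, "Neutrophil %": 82}
print(flag_for_review(sample))  # ['WBC', 'Plt', 'Neutrophil %']
```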
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
  - Not explicitly stated, but for hematology analyzers, ground truth is typically established through:
    - Manual microscopy differential counts by experienced medical technologists or pathologists.
    - Reference methods using highly characterized blood samples or certified reference materials.
    - Comparison to a predicate device that has already established acceptable accuracy against such reference methods. Given that this is a 510(k) for a modification, comparison to the predicate (the older software version) would be a primary method, implicitly relying on the predicate's established ground truth (a generic method-comparison sketch follows this item).
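A generic way such a predicate comparison is summarized is a Bland-Altman analysis of paired results. The following minimal sketch uses invented data; the choice of analysis is a common convention, not something stated in the submission.

```python
import statistics

def bland_altman(candidate, predicate):
    """Mean difference (bias) and 95% limits of agreement between two methods."""
    diffs = [c - p for c, p in zip(candidate, predicate)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired Hgb results (g/dL): modified device vs. predicate
hgb_new = [13.1, 14.6, 9.8, 12.2, 15.4, 11.0]
hgb_old = [13.0, 14.8, 9.9, 12.0, 15.5, 11.2]
bias, (lo, hi) = bland_altman(hgb_new, hgb_old)
print(f"Bias {bias:+.2f} g/dL, 95% LoA [{lo:+.2f}, {hi:+.2f}]")
```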
- The sample size for the training set:
  - Not applicable / not specified. The document describes a software modification to an existing device, not the development of a de novo AI/ML model that would require a distinct "training set" in the modern sense. While the original software algorithms would have been developed using some form of data, the text does not refer to a training set for this Version 2A software update. The "algorithm changes" likely involved refinements based on observed performance with the original reagents, or fixes for the noted anomalies, rather than retraining of a complex AI model.
- How the ground truth for the training set was established:
  - Not applicable / not specified, for the reasons above. If data were used to refine the algorithms (rather than a formal "training set"), the ground truth would have been established using methods similar to those for the test set (manual microscopy, reference methods, or predicate device comparison).