CellaVision® DC-1 is an automated cell-locating device intended for in-vitro diagnostic use in clinical laboratories.
CellaVision® DC-1 is intended to be used by operators trained in the use of the device.
Intended use of the Peripheral Blood Application
The Peripheral Blood Application is intended for differential count of white blood cells (WBC), characterization of red blood cell (RBC) morphology and platelet estimation.
The CellaVision® DC-1 with the Peripheral Blood Application automatically locates blood cells on peripheral blood (PB) smears. The application presents images of the blood cells for review. A skilled operator, trained in the recognition of blood cells, identifies and verifies the suggested classification of each cell according to type.
The CellaVision® DC-1 is an automated cell-locating device intended for in-vitro diagnostic use. CellaVision® DC-1 automatically locates and presents images of blood cells found on peripheral blood smears. The operator identifies and verifies the suggested classification of each cell according to type. CellaVision® DC-1 is intended to be used by skilled operators trained in the use of the device and in the recognition of blood cells.
The CellaVision® DC-1 consists of a built-in PC with a Solid-State Disc (SSD) containing CellaVision DM Software (CDMS), a high-power magnification microscope with LED illumination, an XY stage, a proprietary camera with firmware, built-in motor and illumination-LED controllers, a casing, and an external power supply. It is capable of handling one slide at a time.
The following is a detailed breakdown of the acceptance criteria and study information for the CellaVision® DC-1 device, as described in the submission:
Acceptance Criteria and Device Performance
Parameter | Acceptance Criteria | Reported Device Performance |
---|---|---|
WBC Repeatability | Proportional cell count in percent for each cell class. All tests except repeatability and within-laboratory precision on three occasions met acceptance criteria. Variation in mean values between slides 1 and 2 for specific cell types should not exceed the acceptance criteria. | Overall successful. Three samples displayed variation in mean values between slides 1 and 2 for specific cell types, resulting in a variation slightly above the acceptance criteria. |
RBC Repeatability | Agreement (percentage of runs reporting the grade) for each morphology. Variation in mean values between slide 1 and 2 for specific cell types should not exceed acceptance criteria. | Overall successful. One sample displayed variation in mean value between slide 1 and 2 for the specific cell type, resulting in a variation above acceptance criteria. |
PLT Repeatability | Agreement (percentage of runs reporting the actual PLT level) for each PLT level. | Met the acceptance criterion in all samples at all three sites. |
WBC Reproducibility | Not explicitly stated in quantitative terms but implied to be successful based on ANOVA on proportional cell count for each class. | Overall successful. |
RBC Reproducibility | Not explicitly stated in quantitative terms but implied to be successful based on agreement (percentage of runs reporting the grade). | Overall successful. |
PLT Reproducibility | Agreement (percentage of runs reporting the most prevalent level) for each PLT level. | Met the acceptance criterion in all samples. Overall successful. |
WBC Comparison (Accuracy) | Linear regression slope: 0.8-1.2. Intercept: ±0.2 for individual cell types with a normal level of >5%. | Accuracy evaluations successful in all studies, showing no systematic difference between DC-1 and DM1200. Agreement for Segmented Neutrophils, Lymphocytes, Eosinophils, and Monocytes were all within acceptance criteria. |
WBC Comparison (Distribution & Morphology) | Not explicitly stated, but PPA, NPA, and OA percentages are reported for comparison to the predicate. | Distribution: OA 89.8%, PPA 89.2%, NPA 90.4%. Morphology: OA 91.3%, PPA 88.6%, NPA 92.5%. |
RBC Comparison | Not explicitly stated, but PPA, NPA, and OA percentages are reported for comparison to the predicate, with 95% CI. | Color: OA 79.9% (76.4%-82.9%), PPA 87.8% (82.3%-91.8%), NPA 76.3% (71.9%-80.2%). Size: OA 88.2% (85.3%-90.6%), PPA 89.8% (86.3%-92.2%), NPA 84.8% (79.0%-89.2%). Shape: OA 85.2% (82.0%-87.8%), PPA 87.3% (82.3%-91.0%), NPA 83.8% (79.6%-87.3%). |
PLT Comparison | Cohen's Kappa coefficient ≥ 0.6. | Weighted Kappa 0.89405 (95% CI: 0.87062 to 0.91748), meeting the acceptance criterion. |
EMC Testing | Conformity with IEC 61010-1:2010 and IEC 61010-2-101:2015. | The test showed that the DC-1 is in conformity with IEC 61010-1:2010 and IEC 61010-2-101:2015. |
Software V&V | Software verification and validation testing documentation provided as recommended by FDA. | Documentation was provided as recommended by FDA's Guidance for Industry and Staff. Software classified as "moderate" level of concern. |
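The PPA, NPA, and OA figures in the table follow the standard percent-agreement definitions for comparing a test device against a comparator. The sketch below, using invented counts, shows how these metrics and a Wilson-score 95% confidence interval (a common choice for such proportions; the submission does not state which interval method was used) are conventionally computed.

```python
# Sketch only: standard percent-agreement metrics (OA, PPA, NPA) from a
# 2x2 table of test-device vs. reference results, plus a Wilson-score
# 95% CI. All counts here are invented for illustration.
def agreement_metrics(tp, fp, fn, tn):
    """Overall, positive, and negative percent agreement (as fractions)."""
    oa = (tp + tn) / (tp + fp + fn + tn)  # both methods agree
    ppa = tp / (tp + fn)                  # agreement on reference-positives
    npa = tn / (tn + fp)                  # agreement on reference-negatives
    return oa, ppa, npa

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a proportion (z=1.96 for ~95%)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * ((p * (1 - p) / n + z ** 2 / (4 * n ** 2)) ** 0.5) / denom
    return centre - half, centre + half

# Hypothetical counts: 90 concordant positives, 95 concordant negatives
oa, ppa, npa = agreement_metrics(tp=90, fp=10, fn=5, tn=95)
ci = wilson_ci(successes=185, n=200)
```

The Wilson interval is preferred over the simple normal approximation here because agreement proportions near 100% otherwise produce intervals that extend past 1.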
Study Details
- Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
- WBC Comparison: 598 samples (1196 slides, A and B slide).
- RBC Comparison: 586 samples.
- PLT Comparison: 598 samples.
- Repeatability and Reproducibility studies:
- WBC & RBC Repeatability: Five samples at each of three sites.
- PLT Repeatability: Four samples at each of three sites.
- WBC Reproducibility: Five samples, five slides prepared from each, processed daily at three sites for five days.
- RBC & PLT Reproducibility: Four samples, five slides prepared from each, processed daily at three sites for five days.
- Data Provenance: The studies were conducted in a clinical setting using three different laboratories. The specific country of origin is not mentioned, but the context implies clinical labs. The studies appear to be prospective as samples were processed for the studies.
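The repeatability design described above (replicate runs of the same slides, evaluated on the proportional cell count per class) can be illustrated with a minimal sketch. The cell counts, class names, and number of runs below are invented; the actual studies applied per-class acceptance criteria not reproduced here.

```python
# Illustrative sketch (not the submission's protocol): compute the mean
# and between-run SD of the proportional WBC count per class across
# replicate runs of one slide. All run data are invented.
from statistics import mean, stdev

def proportional_counts(run):
    """Convert raw per-class cell counts for one run into percentages."""
    total = sum(run.values())
    return {cls: 100 * n / total for cls, n in run.items()}

def repeatability(runs):
    """Per-class (mean, between-run SD) of the proportional counts."""
    props = [proportional_counts(r) for r in runs]
    return {cls: (mean(p[cls] for p in props), stdev(p[cls] for p in props))
            for cls in props[0]}

runs = [  # hypothetical replicate runs of one slide
    {"neutrophil": 60, "lymphocyte": 30, "monocyte": 7, "eosinophil": 3},
    {"neutrophil": 62, "lymphocyte": 28, "monocyte": 7, "eosinophil": 3},
    {"neutrophil": 59, "lymphocyte": 31, "monocyte": 6, "eosinophil": 4},
]
stats = repeatability(runs)
```

A repeatability criterion of the kind quoted in the table would then bound the per-class variation (e.g. the SD, or the difference in means between paired slides) rather than the raw counts.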
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
- The document states that a "skilled operator trained in recognition of blood cells" identifies and verifies the suggested classification. For the comparison studies, the predicate device (DM1200) results served as the reference for comparison. While the specific number or qualifications of experts adjudicating the DM1200's results (which serve as the reference standard) are not explicitly stated, the context implies that a medically trained professional would be involved in generating and validating those results. The study design for WBC and RBC comparisons references CLSI H20-A2, which typically outlines protocols for comparison of manual differential leukocyte counts, implying that the "ground truth" (or reference standard) would have been established by trained medical technologists or pathologists according to accepted procedures.
- Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- The document describes a comparison study between the CellaVision® DC-1 (test device) and the CellaVision® DM1200 (reference device, K092868). The results from a "skilled operator" using the predicate device were used as the reference data. Therefore, the adjudication method for the test set ground truth (if DM1200's results are considered the ground truth equivalent) is essentially the standard workflow of the predicate device with human operator verification. There is no explicit mention of an adjudication panel for discrepancies between human observers or between the device and a human. The DC-1's proposed classifications are compared against the verified classifications from the DM1200.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC comparative effectiveness study evaluating human readers with AI vs without AI assistance was presented. The study focuses on comparing the new device (DC-1, which uses CNN for pre-classification) against a predicate device (DM1200, which uses ANN for pre-classification), both of which employ AI in a human-in-the-loop workflow. Both systems are "automated cell-locating devices" where a "skilled operator" reviews and verifies the pre-classified cells. The study does not quantify the improvement of a human reader with either AI system compared to a human reader performing a completely manual differential count without any AI assistance.
- If a standalone (i.e. algorithm only without human-in-the-loop performance) study was done:
- No, a standalone (algorithm only) performance study was not done or reported as part of the submission. The device is explicitly intended for "in-vitro diagnostic use in clinical laboratories" where a "skilled operator...identifies and verifies the suggested classification of each cell according to type." The device "pre-classifies" WBCs and "pre-characterizes" RBCs, but the operator's verification is integral to the intended use. The performance data for repeatability and comparability were evaluated on the "preclassified results suggested by the device" (Repeatability) or "verified WBC/RBC results from the DC-1" (Comparison), indicating the involvement of a human in the final assessment for the comparison studies.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The ground truth for the comparison studies was established by the verified results obtained from the predicate device (CellaVision® DM1200) operated by a skilled professional. This is implied to be equivalent to an expert consensus based on established laboratory practices, as the DM1200 is also a device requiring human verification. For repeatability and reproducibility studies, "preclassified results suggested by the device" or "verified data" were used, again implying that the 'ground truth' for evaluating the DC-1's consistency relies on its own outputs, or validated outputs, rather than an independent expert consensus on raw slides.
- The sample size for the training set:
- The document does not specify the sample size used for training the Convolutional Neural Networks (CNN) for the CellaVision® DC-1. It only mentions that CNNs are used for preclassification and precharacterization.
- How the ground truth for the training set was established:
- The document does not specify how the ground truth for the training set was established. It only states that the CNNs are "trained to distinguish between classes of white blood cells."
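The PLT comparison criterion in the table above (Cohen's weighted kappa ≥ 0.6) can be sketched as follows. This assumes linear disagreement weights over ordinal PLT levels, which the submission does not specify, and the paired ratings are invented.

```python
# Sketch of linearly weighted Cohen's kappa for paired ordinal ratings
# (e.g. PLT levels from two devices). Weighting scheme and data are
# assumptions for illustration, not taken from the submission.
def weighted_kappa(pairs, levels):
    """Linearly weighted Cohen's kappa; `pairs` are (rater_a, rater_b)."""
    k = len(levels)
    idx = {lvl: i for i, lvl in enumerate(levels)}
    n = len(pairs)
    # observed joint distribution over level pairs
    obs = [[0.0] * k for _ in range(k)]
    for a, b in pairs:
        obs[idx[a]][idx[b]] += 1 / n
    row = [sum(obs[i]) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # linear disagreement weight: w_ij = |i - j| / (k - 1)
    num = sum(abs(i - j) / (k - 1) * obs[i][j]
              for i in range(k) for j in range(k))
    den = sum(abs(i - j) / (k - 1) * row[i] * col[j]
              for i in range(k) for j in range(k))
    return 1 - num / den

levels = ["low", "normal", "high"]  # hypothetical ordinal PLT levels
kappa = weighted_kappa(
    [("low", "low"), ("low", "normal"), ("normal", "normal"),
     ("high", "high")],
    levels,
)
```

With linear weights, near-miss disagreements (adjacent levels) are penalized less than disagreements across the full scale, which is why weighted kappa is the conventional statistic for ordinal gradings like PLT level.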
§ 864.5260 Automated cell-locating device.
(a) Identification. An automated cell-locating device is a device used to locate blood cells on a peripheral blood smear, allowing the operator to identify and classify each cell according to type. (Peripheral blood is blood circulating in one of the body's extremities, such as the arm.)
(b) Classification. Class II (performance standards).