Search Results
Found 6 results
510(k) Data Aggregation
(224 days)
CellaVision AB
CellaVision® DC-1 is an automated cell-locating device intended for in-vitro diagnostic use in clinical laboratories.
CellaVision® DC-1 is intended to be used by operators trained in the use of the device.
Intended use of the Peripheral Blood Application
The Peripheral Blood Application is intended for differential count of white blood cells (WBC), characterization of red blood cell (RBC) morphology and platelet estimation.
The CellaVision® DC-1 with the Peripheral Blood Application automatically locates blood cells on peripheral blood (PB) smears. The application presents images of the blood cells for review. A skilled operator trained in recognition of blood cells, identifies and verifies the suggested classification of each cell according to type.
The CellaVision® DC-1 is an automated cell-locating device intended for in-vitro diagnostic use. CellaVision® DC-1 automatically locates and presents images of blood cells found on peripheral blood smears. The operator identifies and verifies the suggested classification of each cell according to type. CellaVision® DC-1 is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells.
The CellaVision® DC-1 consists of a built-in PC with a Solid-State Disc (SSD) containing CellaVision DM Software (CDMS), a high-power magnification microscope with a LED illumination, an XY stage, a proprietary camera with firmware, built-in motor- and illumination LED controller, a casing and an external power supply. It is capable of handling one slide at a time.
Here's a detailed breakdown of the acceptance criteria and study information for the CellaVision® DC-1 device, based on the provided text:
Acceptance Criteria and Device Performance
Parameter | Acceptance Criteria | Reported Device Performance |
---|---|---|
WBC Repeatability | Proportional cell count in percent for each cell class. Variation in mean values between slide 1 and slide 2 for specific cell types should not exceed acceptance criteria. | Overall successful. All tests met acceptance criteria except repeatability and within-laboratory precision on three occasions: three samples displayed variation in mean values between slide 1 and slide 2 for specific cell types slightly above acceptance criteria. |
RBC Repeatability | Agreement (percentage of runs reporting the grade) for each morphology. Variation in mean values between slide 1 and 2 for specific cell types should not exceed acceptance criteria. | Overall successful. One sample displayed variation in mean value between slide 1 and 2 for the specific cell type, resulting in a variation above acceptance criteria. |
PLT Repeatability | Agreement (percentage of runs reporting the actual PLT level) for each PLT level. | Met the acceptance criterion in all samples at all three sites. |
WBC Reproducibility | Not explicitly stated in quantitative terms but implied to be successful based on ANOVA on proportional cell count for each class. | Overall successful. |
RBC Reproducibility | Not explicitly stated in quantitative terms but implied to be successful based on agreement (percentage of runs reporting the grade). | Overall successful. |
PLT Reproducibility | Agreement (percentage of runs reporting the most prevalent level) for each PLT level. | Met the acceptance criterion in all samples. Overall successful. |
WBC Comparison (Accuracy) | Linear regression slope: 0.8-1.2. Intercept: ±0.2 for individual cell types with a normal level of >5%. | Accuracy evaluations successful in all studies, showing no systematic difference between DC-1 and DM1200. Agreement for Segmented Neutrophils, Lymphocytes, Eosinophils, and Monocytes were all within acceptance criteria. |
WBC Comparison (Distribution & Morphology) | Not explicitly stated, but PPA, NPA, and OA percentages are reported for comparison to predicate. | Distribution: OA 89.8%, PPA 89.2%, NPA 90.4%. Morphology: OA 91.3%, PPA 88.6%, NPA 92.5%. |
RBC Comparison | Not explicitly stated, but PPA, NPA, and OA percentages are reported for comparison to predicate with 95% CI. | Color: OA 79.9% (76.4%-82.9%), PPA 87.8% (82.3%-91.8%), NPA 76.3% (71.9%-80.2%). Size: OA 88.2% (85.3%-90.6%), PPA 89.8% (86.3%-92.2%), NPA 84.8% (79.0%-89.2%). Shape: OA 85.2% (82.0%-87.8%), PPA 87.3% (82.3%-91.0%), NPA 83.8% (79.6%-87.3%). |
PLT Comparison | Cohen's Kappa coefficient ≥ 0.6. | Weighted Kappa 0.89405 (95% CI: 0.87062 to 0.91748), meeting the acceptance criterion. |
EMC Testing | Conformity with IEC 61010-1:2010 and IEC 61010-2-101:2015. | The test showed that the DC-1 is in conformity with IEC 61010-1:2010 and IEC 61010-2-101:2015. |
Software V&V | Software verification and validation testing documentation provided as recommended by FDA. | Documentation was provided as recommended by FDA's Guidance for Industry and Staff. Software classified as "moderate" level of concern. |
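The agreement statistics reported in the table (overall, positive, and negative percent agreement against the predicate, and a weighted Cohen's kappa for the ordinal PLT levels) can be computed as sketched below. The helper names and sample data are illustrative, not taken from the submission.

```python
def agreement(test, ref):
    """OA, PPA, NPA for paired binary findings (1 = present, 0 = absent),
    with the predicate (reference) results defining positives/negatives."""
    tp = sum(1 for t, r in zip(test, ref) if t == 1 and r == 1)
    tn = sum(1 for t, r in zip(test, ref) if t == 0 and r == 0)
    fp = sum(1 for t, r in zip(test, ref) if t == 1 and r == 0)
    fn = sum(1 for t, r in zip(test, ref) if t == 0 and r == 1)
    oa = (tp + tn) / len(ref)        # overall agreement
    ppa = tp / (tp + fn)             # agreement on reference-positive findings
    npa = tn / (tn + fp)             # agreement on reference-negative findings
    return oa, ppa, npa

def weighted_kappa(test, ref, levels):
    """Linearly weighted Cohen's kappa for ordinal gradings (e.g. PLT levels)."""
    k, n = len(levels), len(ref)
    idx = {lvl: i for i, lvl in enumerate(levels)}
    obs = [[0.0] * k for _ in range(k)]          # observed proportion matrix
    for t, r in zip(test, ref):
        obs[idx[t]][idx[r]] += 1.0 / n
    p_test = [sum(row) for row in obs]           # test-method marginals
    p_ref = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = abs(i - j) / (k - 1)             # linear disagreement weight
            num += w * obs[i][j]                 # observed weighted disagreement
            den += w * p_test[i] * p_ref[j]      # chance-expected disagreement
    return 1.0 - num / den

# Illustrative check against the PLT criterion of kappa >= 0.6:
kappa = weighted_kappa([0, 1, 2, 1, 2], [0, 1, 2, 1, 1], levels=[0, 1, 2])
```

Note that the submission reports a weighted kappa with a 95% CI; the sketch above computes only the point estimate.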
Study Details
- Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
- WBC Comparison: 598 samples (1196 slides, A and B slide).
- RBC Comparison: 586 samples.
- PLT Comparison: 598 samples.
- Repeatability and Reproducibility studies:
- WBC & RBC Repeatability: Five samples at each of three sites.
- PLT Repeatability: Four samples at each of three sites.
- WBC Reproducibility: Five samples, five slides prepared from each, processed daily at three sites for five days.
- RBC & PLT Reproducibility: Four samples, five slides prepared from each, processed daily at three sites for five days.
- Data Provenance: The studies were conducted in a clinical setting using three different laboratories. The specific country of origin is not mentioned, but the context implies clinical labs. The studies appear to be prospective as samples were processed for the studies.
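The reproducibility analyses referenced above rest on an ANOVA of the proportional cell count for each class: between-group variance (e.g. across sites) is compared with within-group variance. A minimal one-way sketch, using made-up site data rather than study values:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group variance over
    within-group variance (e.g. neutrophil percentages grouped by site)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical neutrophil percentages for one sample, run at three sites:
site_a = [55.2, 54.8, 55.5, 54.9, 55.1]
site_b = [55.0, 55.3, 54.7, 55.2, 54.9]
site_c = [55.4, 55.1, 54.8, 55.0, 55.3]
f_stat = one_way_anova_f([site_a, site_b, site_c])  # small F suggests sites agree
```

The actual study design is nested (samples, slides, sites, days), so the submission's ANOVA would have more factors than this one-way illustration.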
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
- The document states that a "skilled operator trained in recognition of blood cells" identifies and verifies the suggested classification. For the comparison studies, the predicate device (DM1200) results served as the reference for comparison. While the specific number or qualifications of experts adjudicating the DM1200's results (which serve as the reference standard) are not explicitly stated, the context implies that a medically trained professional would be involved in generating and validating those results. The study design for WBC and RBC comparisons references CLSI H20-A2, which typically outlines protocols for comparison of manual differential leukocyte counts, implying that the "ground truth" (or reference standard) would have been established by trained medical technologists or pathologists according to accepted procedures.
- Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- The document describes a comparison study between the CellaVision® DC-1 (test device) and the CellaVision® DM1200 (reference device, K092868). The results from a "skilled operator" using the predicate device were used as the reference data. Therefore, the adjudication method for the test set ground truth (if DM1200's results are considered the ground truth equivalent) is essentially the standard workflow of the predicate device with human operator verification. There is no explicit mention of an adjudication panel for discrepancies between human observers or between the device and a human. The DC-1's proposed classifications are compared against the verified classifications from the DM1200.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC comparative effectiveness study evaluating human readers with AI vs without AI assistance was presented. The study focuses on comparing the new device (DC-1, which uses CNN for pre-classification) against a predicate device (DM1200, which uses ANN for pre-classification), both of which employ AI in a human-in-the-loop workflow. Both systems are "automated cell-locating devices" where a "skilled operator" reviews and verifies the pre-classified cells. The study does not quantify the improvement of a human reader with either AI system compared to a human reader performing a completely manual differential count without any AI assistance.
- If a standalone (i.e. algorithm only, without human-in-the-loop) performance study was done:
- No, a standalone (algorithm only) performance study was not done or reported as part of the submission. The device is explicitly intended for "in-vitro diagnostic use in clinical laboratories" where a "skilled operator...identifies and verifies the suggested classification of each cell according to type." The device "pre-classifies" WBCs and "pre-characterizes" RBCs, but the operator's verification is integral to the intended use. The performance data for repeatability and comparability were evaluated on the "preclassified results suggested by the device" (Repeatability) or "verified WBC/RBC results from the DC-1" (Comparison), indicating the involvement of a human in the final assessment for the comparison studies.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The ground truth for the comparison studies was established by the verified results obtained from the predicate device (CellaVision® DM1200) operated by a skilled professional. This is implied to be equivalent to an expert consensus based on established laboratory practices, as the DM1200 is also a device requiring human verification. For repeatability and reproducibility studies, "preclassified results suggested by the device" or "verified data" were used, again implying that the 'ground truth' for evaluating the DC-1's consistency relies on its own outputs, or validated outputs, rather than an independent expert consensus on raw slides.
- The sample size for the training set:
- The document does not specify the sample size used for training the Convolutional Neural Networks (CNN) for the CellaVision® DC-1. It only mentions that CNNs are used for preclassification and precharacterization.
- How the ground truth for the training set was established:
- The document does not specify how the ground truth for the training set was established. It only states that the CNNs are "trained to distinguish between classes of white blood cells."
(357 days)
CELLAVISION AB
DM1200 is an automated system intended for in-vitro diagnostic use.
The body fluid application is intended for differential count of white blood cells. The system automatically locates and presents images of cells on cytocentrifuged body fluid preparations. The operator identifies and verifies the suggested classification of each cell according to type.
DM1200 is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells.
CellaVision DM1200 with the body fluid application automatically locates and presents images of nucleated cells on cytocentrifuged body fluid preparations. The system suggests a classification for each cell and the operator verifies the classification and has the opportunity to change the suggested classification of any cell.
The system preclassifies to the following WBC classes: Unidentified, Neutrophils, Eosinophils, Lymphocytes, Macrophages (including Monocytes) and Other. Cells preclassified as Basophils, Lymphoma cells, Atypical lymphocytes, Blasts and Tumor cells are automatically forwarded to the cell class Other.
Unidentified is a class for cells and objects which the system has pre-classified with a low confidence level.
Here's an analysis of the provided text, outlining the acceptance criteria and study details for the CellaVision DM1200 with the body fluid application:
Acceptance Criteria and Device Performance Study for CellaVision DM1200 with Body Fluid Application
The CellaVision DM1200 with body fluid application is an automated cell-locating device intended for in-vitro diagnostic use, specifically for the differential count of white blood cells in cytocentrifuged body fluid preparations. The system automatically locates and presents cell images, suggests a classification, and requires operator verification.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document does not explicitly state pre-defined "acceptance criteria" as pass/fail thresholds for accuracy or precision. Instead, it presents the results of a method comparison study between the CellaVision DM1200 (Test Method) and its predicate device, CellaVision DM96 (Reference Method), for various leukocyte classifications. The implied acceptance criteria are that the DM1200 demonstrates comparable accuracy and precision to the predicate device.
Cell Class | Acceptance Criteria (Implied: Comparable to Predicate) | Reported Device Performance (CellaVision DM1200 vs. DM96) |
---|---|---|
Accuracy (Regression Analysis: DM1200 = Slope * DM96 + Intercept) | ||
Neutrophils | Regression where slope is close to 1 and intercept close to 0, with high R² | y = 0.9969x + 0.0050, R² = 0.9932 (95% CI Slope: 0.9868-1.0070, 95% CI Intercept: 0.0004-0.0096) |
Lymphocytes | Regression where slope is close to 1 and intercept close to 0, with high R² | y = 0.9815x + 0.0016, R² = 0.9829 (95% CI Slope: 0.9656-0.9973, 95% CI Intercept: -0.0049-0.0081) |
Eosinophils | Regression where slope is close to 1 and intercept close to 0, with high R² | y = 1.1048x - 0.0002, R² = 0.9629 (95% CI Slope: 1.0782-1.1314, 95% CI Intercept: -0.0007-0.0003) |
Macrophages | Regression where slope is close to 1 and intercept close to 0, with high R² | y = 1.0067x - 0.0050, R² = 0.9823 (95% CI Slope: 0.9901-1.0232, 95% CI Intercept: -0.0125-0.0024) |
Other cells | Regression where slope is close to 1 and intercept close to 0, with high R² | y = 0.9534x + 0.0032, R² = 0.9273 (95% CI Slope: 0.9207-0.9861, 95% CI Intercept: -0.0002-0.0065) |
Precision/Reproducibility (Short-term Imprecision) | ||
Neutrophils | SD % comparable between test and reference method | Test Method: Mean % 32.0, SD % 3.2; Reference Method: Mean % 31.6, SD % 3.4 |
Lymphocytes | SD % comparable between test and reference method | Test Method: Mean % 30.1, SD % 5.6; Reference Method: Mean % 30.5, SD % 5.7 |
Eosinophils | SD % comparable between test and reference method | Test Method: Mean % 0.6, SD % 0.7; Reference Method: Mean % 0.5, SD % 0.6 |
Macrophages | SD % comparable between test and reference method | Test Method: Mean % 35.3, SD % 5.8; Reference Method: Mean % 35.5, SD % 6.2 |
Other cells | SD % comparable between test and reference method | Test Method: Mean % 2.1, SD % 1.7; Reference Method: Mean % 1.9, SD % 2.5 |
The conclusion states that the short-term imprecision was found to be equivalent for the test method and the reference method, and the accuracy results (high R-squared values, slopes close to 1, and intercepts close to 0) demonstrate substantial equivalence.
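The accuracy comparison above reduces to fitting DM1200 = slope * DM96 + intercept for each cell class and checking that the slope is near 1, the intercept near 0, and R² high. A minimal ordinary-least-squares sketch; the submission may well have used a different regression model (e.g. Deming), and the data and thresholds here are invented:

```python
def ols_fit(x, y):
    """Ordinary least squares for y = slope * x + intercept, plus R^2."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    syy = sum((yi - mean_y) ** 2 for yi in y)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    r2 = sxy * sxy / (sxx * syy)
    return slope, intercept, r2

# Invented proportional counts (reference method x, test method y):
dm96 = [0.10, 0.25, 0.40, 0.55, 0.70]
dm1200 = [0.11, 0.24, 0.41, 0.56, 0.69]
slope, intercept, r2 = ols_fit(dm96, dm1200)
# An equivalence-style check mirroring the "slope near 1, intercept near 0" idea:
equivalent = 0.9 <= slope <= 1.1 and abs(intercept) <= 0.05 and r2 >= 0.95
```

The reported confidence intervals for slope and intercept would additionally require standard errors, which this sketch omits.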
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 260 samples.
- CSF: 62 samples
- Serous fluid: 151 samples
- Synovial fluid: 47 samples
- Data Provenance:
- Country of Origin: Not explicitly stated, but samples were collected from "two sites." Given the submitter is in Sweden, and the regulatory contact is in the USA, it's unclear if these sites were in Sweden, the USA, or elsewhere.
- Retrospective or Prospective: Not explicitly stated, but the description "collected from two sites" and then analyzed suggests a prospective collection or at least fresh samples for the study.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Not explicitly stated for the establishment of ground truth for the test set's initial classifications. However, the document mentions:
- "The results were then verified by skilled human operators." This indicates human review post-analysis by both the test and reference methods.
- The "Intended Use" section states: "DM1200 is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells."
- Qualifications of Experts: "Skilled human operators, trained in the use of the device and in recognition of blood cells." No specific professional qualifications (e.g., "radiologist with 10 years of experience") are provided.
4. Adjudication Method for the Test Set
The document states: "The results were then verified by skilled human operators." It does not specify a multi-reader adjudication method like 2+1 or 3+1. It implies a single operator verification for each result generated by both the test and reference methods.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a multi-reader multi-case (MRMC) comparative effectiveness study designed to measure the effect size of how much human readers improve with AI vs. without AI assistance was not explicitly described. This study was a method comparison between two devices (one with the new application, one being the predicate) with human verification. The device's function is to suggest classifications, which implies an assistive role, but the study design was not an MRMC study comparing human performance with and without AI.
6. Standalone (Algorithm Only) Performance Study
Yes, standalone performance was implicitly assessed. The "Test Method" (CellaVision DM1200) "suggests a classification for each cell," meaning the algorithm performs an initial classification without human intervention. The reported accuracy metrics (R², slope, intercept) compare these suggested classifications to those obtained by the reference method (DM96, which also involves algorithmic preclassification). However, the study concludes with human verification of these results, suggesting the standalone performance without the "human-in-the-loop" step described in the intended use is not the final reported performance. The "Accuracy results" table (Table 3.3) and "Precision/Reproducibility" table (Table 3.4) reflect the device's performance before the final human verification step that might change classifications.
7. Type of Ground Truth Used
The ground truth used for the comparison was established by the predicate device (CellaVision DM96) with its own human verification, after undergoing "a 200-cell differential count... with both the test method and the reference method. The results were then verified by skilled human operators." Therefore, it's a form of expert-verified reference measurement. It is not pathology, or outcomes data.
8. Sample Size for the Training Set
The document does not explicitly state the sample size for the training set used to develop the CellaVision DM1200's classification algorithms. It mentions "deterministic artificial neural networks (ANN's) trained to distinguish between classes of white blood cells," but no details about the training data are provided within this summary.
9. How the Ground Truth for the Training Set Was Established
The document does not explicitly describe how the ground truth for the training set was established. It states that the ANNs were "trained to distinguish between classes of white blood cells," implying that a labeled dataset was used for training, but the process of creating these labels (e.g., expert consensus, manual review) is not detailed in this 510(k) summary.
(63 days)
CELLAVISION AB
DM1200 is an automated cell-locating device.
DM1200 automatically locates and presents images of blood cells on peripheral blood smears. The operator identifies and verifies the suggested classification of each cell according to type.
DM1200 is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells.
DM1200 is an automated cell-locating device for differential count of white blood cells, characterization of red blood cell morphology and platelet estimation. DM1200 consists of a slide scanning unit (a robot gripper, a microscope and a camera) and a computer system containing the acquisition and classification software "CellaVision® DM software".
Here's an analysis of the acceptance criteria and study details for the CellaVision® DM1200 Automated Hematology Analyzer, based on the provided 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
The provided 510(k) summary states that "The results fulfilled the pre-defined requirements" for accuracy for cell-location, accuracy for verified classification for each cell class, precision for verified classification for each cell class, and clinical sensitivity and specificity. However, the specific quantitative acceptance criteria or the reported performance metrics (e.g., specific accuracy percentages, precision values, sensitivity, and specificity thresholds) are not detailed in the provided document. The summary only generally claims that the results fulfilled these requirements.
To illustrate what such a table would look like if the data were present, here's a hypothetical structure:
Hypothetical Acceptance Criteria and Reported Device Performance
Performance Metric | Acceptance Criteria (Hypothetical) | Reported Device Performance (Hypothetical) |
---|---|---|
Accuracy for cell-location | > 95% | > 98% |
Accuracy for verified classification | > 90% for all major cell classes | > 92% for all major cell classes |
Precision for verified classification | CV < 5% for each cell class | CV < 5% for each cell class |
Clinical Sensitivity (overall WBC diff) | > 90% | > 93% |
Clinical Specificity (overall WBC diff) | > 85% | > 88% |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not explicitly stated. The summary mentions "clinical tests" and "a pre-defined protocol for differentiation of the approved standard CLSI, H20-A2, Reference Leukocyte (WBC) Differential Count (Proportional) and Evaluation of Instrumental Methods." While this indicates a formal study, the specific number of blood smears or cells included in the test set is not mentioned.
- Data Provenance: Not explicitly stated. The document does not specify the country of origin of the data.
- Retrospective or Prospective: Not explicitly stated.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Not explicitly stated.
- Qualifications of Experts: The study was performed according to CLSI H20-A2, which typically involves experienced morphologists. However, the specific qualifications (e.g., "radiologist with 10 years of experience") are not provided in the summary. The device's intended use also states it needs to be used by "skilled operators, trained in the use of the device and in recognition of blood cells," implying that the experts establishing ground truth would be highly skilled in blood cell morphology.
4. Adjudication Method for the Test Set
- Adjudication Method: Not explicitly stated. The CLSI H20-A2 standard, which the study adhered to, typically uses an agreement process among multiple experts to establish a reference differential, which could involve consensus or adjudication. However, the exact method (e.g., 2+1, 3+1, none) is not detailed in this summary.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study: The summary discusses "comparison to the predicate device DM96 for differentiation of white blood cells" and "to confirm equivalence with the predicate device." However, it focuses on the device's performance against the predicate's results and adherence to standards, rather than a comparative effectiveness study showing human readers' improvement with AI vs. without AI assistance. The DM1200 is described as an "automated cell-locating device" where "The operator identifies and verifies the suggested classification of each cell." This indicates a human-in-the-loop system, but the study described does not quantify the improvement of human readers using the AI versus without it.
- Effect Size of Human Reader Improvement: Not applicable, as this specific type of MRMC study was not described.
6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance
- Standalone Performance: No, a standalone performance study (algorithm only) was not explicitly described for the DM1200. The device's intended use clearly states: "The operator identifies and verifies the suggested classification of each cell according to type." This indicates that the device is designed to be used with a human reviewer in the loop, validating the AI's suggestions. The "accuracy for the verified classification" metric further supports this, suggesting the final classification is human-verified.
7. Type of Ground Truth Used
- Type of Ground Truth: Expert consensus with adherence to the CLSI H20-A2, Reference Leukocyte (WBC) Differential Count (Proportional) and Evaluation of Instrumental Methods; 2nd Ed. standard. This standard provides guidelines for establishing reference differentials, which typically rely on experienced morphologists. The summary states: "performed according to a pre-defined protocol for differentiation of the approved standard CLSI, H20-A2."
8. Sample Size for the Training Set
- Sample Size for Training Set: Not mentioned. The summary focuses on the clinical tests for demonstrating equivalence, not on the details of the model's development or training data.
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: Not mentioned. Similar to the training set size, the details of how the training data's ground truth was established are not provided in the 510(k) summary.
(277 days)
CELLAVISION AB
DM96 is an automated cell-locating device.
The body fluid application is intended for differential count of white blood cells. The system automatically locates and presents images of cells on cytocentrifuged body fluid preparations. The operator identifies and verifies the suggested classification of each cell according to type.
DM96 is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells.
The CellaVision DM96 with the body fluid application is a laboratory instrument used to perform differential analysis by locating, digitally storing and displaying cells in human body fluid preparations.
The CellaVision DM96 with the body fluid application is a new intended use that follows the same process as the currently cleared DM96 with white blood cell differential, RBC characterization and platelet estimation (K033840).
Here's a breakdown of the acceptance criteria and the study information for the CellaVision DM96 with the body fluid application, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided text does not explicitly state quantitative "acceptance criteria" or specific "reported device performance" metrics (like accuracy percentages, sensitivity, specificity) for the CellaVision DM96 with the body fluid application. Instead, it focuses on demonstrating substantial equivalence to existing devices and validating its performance through various tests.
However, based on the context, the implied acceptance criteria were likely:
- Accuracy: The device's ability to correctly classify white blood cells in body fluid preparations when compared to manual microscopic methods.
- Precision: The consistency and reproducibility of the device's cell classification.
- Cell Location: The device's ability to effectively locate white blood cells for an operator to review.
- Substantial Equivalence: The device performing as well as or better than the predicate devices and the manual method without raising new safety or effectiveness concerns.
Acceptance Criterion (Implied) | Reported Device Performance (Summary) |
---|---|
Accuracy | "Tests on cytocentrifuged body fluid preparations... were conducted and successfully completed... Tests were also conducted to validate performance including accuracy..." |
Precision | "...and precision..." (from the summary of testing) |
Cell Location | "...and cell-location." (from the summary of testing) |
Substantial Equivalence | "Based on extensive performance testing including comparison to the predicate devices, it is the conclusion of CellaVision AB that DM96 with the body fluid application is substantially equivalent to devices already on the market..." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The text states, "Tests on cytocentrifuged body fluid preparations including specimen types such as cerebrospinal fluid, serous fluid and related fluids were conducted and successfully completed." However, the exact numerical sample size for the test set is not specified.
- Data Provenance: The text does not explicitly state the country of origin or whether the data was retrospective or prospective. It just mentions "Tests on cytocentrifuged body fluid preparations."
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: The text does not specify the number of experts used to establish ground truth or details about their involvement in the "accuracy" testing.
- Qualifications of Experts: The intended use section states, "DM96 is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells." This implies that the comparison or ground truth would be based on the assessment of such "skilled operators" or experts, but their specific qualifications (e.g., years of experience, certifications) are not detailed within the provided snippets. The comparison table mentions "The examiners usually locate/count white blood cells..." which suggests manual review by trained personnel.
4. Adjudication Method for the Test Set
- The text does not specify an explicit adjudication method (e.g., 2+1, 3+1). It only mentions that "the operator identifies and verifies the suggested classification of each cell according to type" which implies a human-in-the-loop review process as the final step.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size
- MRMC Study: The provided text does not explicitly mention a "Multi Reader Multi Case (MRMC) comparative effectiveness study." The focus is on demonstrating substantial equivalence to manual methods rather than quantifying the improvement of human readers with AI assistance versus without AI assistance.
- Effect Size: Therefore, no effect size for human reader improvement with AI assistance is reported. The device's function is to "pre-classify" cells, which are then "verified" by an operator, implying assistance rather than a direct comparison of assisted vs. unassisted reader performance metrics.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- Standalone Performance: The core function of the CellaVision DM96 is described as "The cell images are pre-classified and the operator verifies the suggested classification by accepting or reclassifying." This clearly indicates that the device operates with a human-in-the-loop. Therefore, a standalone (algorithm only) performance study and its results are not provided or implied by this documentation, as the device is not intended for standalone use for final diagnosis. The device's regulatory pathway is likely as an aid to the human operator.
7. The Type of Ground Truth Used
- The ground truth appears to be expert consensus or expert review (manual light microscopic process). The comparison table repeatedly references the "Manual light microscopic process" as the standard against which the device's functionality is compared. The device's output is also subject to "verification of results by skilled human operator."
8. The Sample Size for the Training Set
- The text does not provide any information regarding the sample size used for the training set for the artificial neural networks (ANNs).
9. How the Ground Truth for the Training Set Was Established
- The text states that the device uses "deterministic artificial neural networks (ANNs) trained to distinguish between classes of white blood cells." However, it does not elaborate on how the ground truth for this training data was established. Based on the overall context, it would logically have been established by expert morphologists/pathologists classifying cells.
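None of the submissions describe how expert labels would be aggregated into a ground truth. A minimal sketch of the majority-vote consensus commonly used for this kind of cell labeling might look like the following (the function name and label strings are illustrative, not from the source):

```python
from collections import Counter

def consensus_label(expert_labels):
    """Majority vote across expert classifications; None when no strict majority."""
    (label, votes), = Counter(expert_labels).most_common(1)
    return label if votes > len(expert_labels) / 2 else None

print(consensus_label(["lymphocyte", "lymphocyte", "blast"]))  # lymphocyte
print(consensus_label(["monocyte", "blast"]))                  # None (tie -> no consensus)
```

Cells without a majority label would typically be excluded or sent for adjudicated review rather than used for training.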
(62 days)
CELLAVISION AB
DM96 is an automated cell-locating device.
DM96 automatically locates and presents images of blood cells on peripheral blood smears. The operator identifies and verifies the suggested classification of each cell according to type.
DM96 is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells.
DM96 is an automated cell-locating device for differential count of white blood cells, characterization of red blood cell morphology and platelet estimation. DM96 consists of a slide-scanning unit (a slide feeder, a microscope and a camera) and a computer system containing the acquisition and classification software "CellaVision Blood Differential software".
Here's an analysis of the provided text, extracting the requested information about device acceptance criteria and the supporting study:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state numerical acceptance criteria. Instead, it frames the performance in terms of "equivalence" to the predicate device (DiffMaster Octavia™) and the manual light microscopic process. The reported performance is that the DM96 "is equivalent in accuracy, precision and clinical sensitivity and specificity and fulfilled the pre-defined requirements for overview image quality."
Characteristic | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
Accuracy (White Blood Cell Classification) | Equivalent to DiffMaster Octavia™ and manual light microscopic process | Achieved equivalence to predicate and manual method |
Precision (White Blood Cell Classification) | Equivalent to DiffMaster Octavia™ and manual light microscopic process | Achieved equivalence to predicate and manual method |
Clinical Sensitivity (White Blood Cell Classification) | Equivalent to DiffMaster Octavia™ and manual light microscopic process | Achieved equivalence to predicate and manual method |
Clinical Specificity (White Blood Cell Classification) | Equivalent to DiffMaster Octavia™ and manual light microscopic process | Achieved equivalence to predicate and manual method |
Cell-Location Accuracy | Met pre-defined requirements | Achieved accuracy for cell-location |
Overview Image Quality | Met pre-defined requirements | Fulfilled pre-defined requirements for overview image quality |
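For context, the proportional WBC differential to which these accuracy and precision claims refer is simply each cell class's share of the total verified count, expressed in percent. A minimal sketch (class names and cell counts are hypothetical, not from the submission):

```python
from collections import Counter

def differential_percent(cell_classes):
    """Proportional differential: each cell class as a percentage of all cells counted."""
    counts = Counter(cell_classes)
    total = sum(counts.values())
    return {cls: 100.0 * n / total for cls, n in counts.items()}

# Hypothetical 200-cell verified differential
cells = (["neutrophil"] * 120 + ["lymphocyte"] * 60
         + ["monocyte"] * 14 + ["eosinophil"] * 6)
print(differential_percent(cells))
# {'neutrophil': 60.0, 'lymphocyte': 30.0, 'monocyte': 7.0, 'eosinophil': 3.0}
```

Equivalence testing then compares these per-class percentages between the device (after operator verification) and the manual reference method.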
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state the sample size used for the clinical evaluation (test set) or the data provenance (e.g., country of origin, retrospective/prospective). It only mentions that "A clinical evaluation has been performed to confirm equivalence with the predicate method DiffMaster Octavia™ for differentiation of white blood cells."
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
This information is not provided in the document.
4. Adjudication Method for the Test Set
The document states that for both the predicate device and the DM96, "A competent operator is required to confirm or modify the suggested classification of each cell." This indicates a human-in-the-loop approach where human verification or reclassification is part of the final result. However, the specific adjudication method used for the initial ground truth (e.g., 2+1, 3+1 consensus) for the test set is not explicitly described.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
The document describes a clinical evaluation to confirm equivalence with a predicate method, which is a comparative study. It also explicitly states that the device is intended for use by "skilled operators, trained in the use of the device and in recognition of blood cells," and that "The operator identifies and verifies the suggested classification of each cell according to type." This implies a human-in-the-loop scenario.
However, it does not explicitly state that it was a formal MRMC study, nor does it provide an effect size of how much human readers improve with AI vs. without AI assistance. The study focuses on demonstrating equivalence of the DM96 (with human verification) to existing methods.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
The device description consistently states that the "operator identifies and verifies the suggested classification of each cell." The technological characteristics section further elaborates that "The cell images are pre-classified and the operator verifies the suggested classification by accepting or reclassifying." This indicates that the device is not intended or tested for standalone, algorithm-only performance without a human-in-the-loop to verify classifications.
7. The Type of Ground Truth Used
The ground truth for the clinical evaluation appears to be based on expert consensus from a "competent human examiner" or "skilled operators" using either the manual light microscopic process or the predicate device (DiffMaster Octavia™) for comparison. The study aimed to show equivalence to these established methods, which rely on human expert interpretation.
8. Sample Size for the Training Set
The document provides no information regarding the sample size used for the training set of the deterministic artificial neural networks (ANNs).
9. How the Ground Truth for the Training Set Was Established
The document states that ANNs are "trained to distinguish between classes of white blood cells." However, it does not describe how the ground truth for this training data was established.
(146 days)
CELLAVISION AB
The DiffMaster Octavia™ is an automated cell locating device. DiffMaster Octavia automatically locates and presents images of blood cells on peripheral blood specimens. The operator identifies and verifies the suggested classification of each cell according to type. DiffMaster Octavia is intended to be used by skilled operators, trained in the use of the instrument and in recognition of leukocyte classes.
The DiffMaster-Octavia™ is an automated cell locating device for differential count of white blood cells and characterization of red morphology. It is based on a CellaVision AB developed software system Cytologica. DiffMaster Octavia™consists of a commercially available positioning system for the slides, a commercially available microscope, a commercially available camera and the software system.
Here's a breakdown of the acceptance criteria and study information for the DiffMaster Octavia™ Automatic Hematology Analyzer, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The submission does not explicitly list acceptance criteria in a table format with specific thresholds. Instead, it states that the device demonstrated equivalence to the predicate method; the claim of "results equal to the reference method" is the primary performance basis for substantial equivalence.
Acceptance Criteria Category | Reported Device Performance |
---|---|
Overall Performance | Results equal to the reference method (Romanowski (MGG)-Stain manual light microscopic process for cell classification) for: |
– Accuracy of suggested classification | The accuracy of the suggested classification for each cell type (DiffMaster Octavia™ results compared to the light microscope manual diff count results) was demonstrated to be equivalent to the reference method. |
– Precision (location & display) | The precision for location and display of the cells found was demonstrated to be equivalent to the reference method. |
– Precision (reproducibility) | The precision of the instrument (reproducibility) was demonstrated to be equivalent to the reference method. |
– Sensitivity | The sensitivity of the instrument (rate of false negatives) was demonstrated to be equivalent to the reference method. |
– Specificity | The specificity of the instrument (rate of false positives) was demonstrated to be equivalent to the reference method. |
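The submission reports no numbers, but the standard definitions behind the sensitivity and specificity rows above can be sketched as follows (the per-class counts are hypothetical, for illustration only):

```python
def sensitivity(tp, fn):
    """True-positive rate: lowered by false negatives (missed cells)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: lowered by false positives (spurious flags)."""
    return tn / (tn + fp)

# Hypothetical counts for one cell class: device suggestion vs. manual reference
tp, fn, tn, fp = 45, 5, 940, 10
print(f"sensitivity={sensitivity(tp, fn):.2f}, specificity={specificity(tn, fp):.3f}")
# sensitivity=0.90, specificity=0.989
```

In a multi-class differential these would be computed per cell class, treating that class as "positive" and all others as "negative".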
2. Sample Size and Data Provenance for the Test Set
- Sample Size: The document does not specify the exact sample size (number of cases or cells) used in the clinical trials. It merely states "Two clinical trials have been performed."
- Data Provenance: Not explicitly stated. The document indicates the studies "have been performed according to the approved standard, NCCLS, vol. 23, no 1, document H-20A, March 1992: Reference Leukocyte Differential Count (proportional) and Evaluation of Instrumental Method." It does not mention the country of origin or if the data was retrospective or prospective.
3. Number of Experts and Qualifications for Ground Truth Establishment (Test Set)
- Number of Experts: Not specified.
- Qualifications of Experts: The ground truth was established by the "manual reference method results," implying experienced laboratory technologists. The description of the intended use states the device is "intended to be used by skilled operators, trained in the use of the instrument and in recognition of leukocyte classes," which suggests the benchmark manual method would also be performed by similarly qualified individuals.
4. Adjudication Method for the Test Set
- The document does not describe a specific adjudication method (e.g., 2+1, 3+1). The "manual reference method" implies a standard, accepted process for manual differential counting. In practice this typically involves a single trained operator's assessment, with consensus sought only if discrepancies arise, but neither arrangement is detailed in the provided text.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No. An MRMC comparative effectiveness study, as typically understood for AI-assisted workflows, is not explicitly mentioned. The studies compared the device's suggested classifications (which are then verified by a human) against the manual reference method. This is a comparison of the automated system's output (with human verification) against the traditional human-only method, rather than a measurement of human reader improvement with AI assistance versus without it. Since the device's intended use requires operators to verify or modify the suggestions, it inherently involves a human in the loop.
- Effect Size of Human Improvement with AI vs. without AI: Not reported, as this type of study was not described.
6. Standalone (Algorithm Only) Performance Study
- Not explicitly described as a standalone study in the traditional sense. The "accuracy of the suggested classification for each cell type (DiffMaster Octavia™ results compared to the light microscope manual diff count results)" implies that the algorithm's initial suggestion was compared to the ground truth. However, the device's intended use is always with a human verifying these suggestions. Therefore, while the algorithm's raw suggestion accuracy was likely evaluated internally, the reported clinical performance encompasses the complete human-in-the-loop workflow.
7. Type of Ground Truth Used
- Expert Consensus / Expert Reference Method: The ground truth was established by the "manual reference method results," which means the classifications assigned by skilled human operators using the traditional Romanowski (MGG)-Stain light microscopic process for cell classification.
8. Sample Size for the Training Set
- Not provided. The document states that the software uses "deterministic artificial neural networks (ANNs) trained to distinguish between classes of white blood cells," but does not specify the size of the dataset used for this training.
9. How the Ground Truth for the Training Set Was Established
- Not explicitly stated for the training set ground truth. Similar to the test set, it can be inferred that the training data would have also been labeled based on expert classification (e.g., manual differential counts by skilled technologists) to train the artificial neural networks on correct cell classifications.