510(k) Data Aggregation
Product code: JOY · Review time: 270 days
The X100/X100HT with Full Field Peripheral Blood Smear Application is intended to locate and display images of white cells, red cells, and platelets acquired from fixed and stained peripheral blood smears and assists a qualified technologist in conducting a WBC differential, RBC morphology evaluation, and platelet estimate using those images. For in-vitro diagnostic use only. For professional use only.
The X100 / X100HT with Full Field Peripheral Blood Smear (PBS) Application ("Full Field PBS") is a digital cell morphology solution that presents high-resolution digital images of fixed and stained peripheral blood smears. The Full Field PBS was previously cleared by the Agency on October 2, 2020, through the review of K201301, and on May 3, 2022, through the review of K220013. The system automatically locates and presents images of peripheral blood cells and streamlines the PBS analysis process with a review workflow composed of four steps: (1) full field review, (2) white blood cell (WBC) review (a decision support system (DSS) for WBC pre-classification is available), (3) red blood cell (RBC) review, and (4) platelet review (a DSS for platelet estimation is available).
Under the proposed modification that is the subject of this 510(k) submission, an additional DSS component is added to the RBC and platelet review steps. For RBC analysis, system-suggested RBC morphological pre-gradings are presented to the user by means of a dotted line around a suggested grading selection box. Notably, the user is still required to review the slide and actively mark the final grading, exactly as in the cleared review workflow.
The same approach is used for the update to the platelet review step: a system-suggested platelet clump indication is presented to the user by means of a dotted line around a selection box. Notably, the user is still required to manually mark whether platelet clumps were detected, exactly as currently performed in the cleared review workflow.
The changes under discussion do not affect the cleared indications for use or intended use; the user's workflow of scanning and analyzing peripheral blood smears using the Full Field PBS Application remains otherwise unchanged as well.
The Scopio Labs X100/X100HT with Full Field Peripheral Blood Smear (PBS) Application has received FDA 510(k) clearance (K243144) for a modification that adds system-suggested pre-gradings for RBC morphology and platelet clump indications. The supporting evidence comprised a method comparison study, repeatability and reproducibility studies, and software verification and validation.
1. Acceptance Criteria and Reported Device Performance
The FDA clearance document does not explicitly state the numerical acceptance criteria for the method comparison study. However, it indicates that "The results met the pre-defined acceptance criteria." The reported performance metrics are presented as overall agreement, Positive Percent Agreement (PPA), and Negative Percent Agreement (NPA).
Category | Overall Agreement | PPA | NPA |
---|---|---|---|
RBC Color | 97.88% (97.29% to 98.42%) | 98.33% (97.48% to 99.10%) | 97.61% (96.81% to 98.33%) |
RBC Inclusions | 97.90% (97.50% to 98.27%) | 86.73% (81.66% to 91.23%) | 98.41% (98.06% to 98.78%) |
RBC Shape | 96.22% (95.92% to 96.50%) | 95.35% (94.50% to 96.12%) | 96.40% (96.06% to 96.71%) |
RBC Size | 95.58% (95.06% to 96.13%) | 99.42% (99.03% to 99.75%) | 92.72% (91.82% to 93.70%) |
PLT Clumping | 87.08% (85.25% to 88.92%) | 86.11% (82.13% to 89.91%) | 87.39% (85.39% to 89.39%) |
The precision studies (repeatability and reproducibility) also "met the pre-defined acceptance criteria." However, the specific numerical criteria for these studies are not detailed in the provided document.
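As a point of reference for the statistics quoted above, here is a minimal sketch of how overall agreement, PPA, and NPA are computed from a 2x2 device-versus-reference table. The document does not state how its confidence intervals were derived; Wilson score intervals are assumed here as one common choice, and the counts are hypothetical.

```python
# Minimal sketch: overall agreement, PPA, NPA with Wilson score CIs.
# Wilson intervals and the example counts are assumptions, not taken
# from the clearance document.
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Two-sided Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def agreement_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    n = tp + fp + fn + tn
    return {
        "overall": (tp + tn) / n, "overall_ci": wilson_ci(tp + tn, n),
        "ppa": tp / (tp + fn), "ppa_ci": wilson_ci(tp, tp + fn),
        "npa": tn / (tn + fp), "npa_ci": wilson_ci(tn, tn + fp),
    }

# Hypothetical counts for one morphology category:
print(agreement_metrics(tp=312, fp=41, fn=48, tn=799))
```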
2. Sample Size and Data Provenance for the Test Set
- Sample Size: A total of 1200 anonymized PBS slides were used for the method comparison study.
- Data Provenance: The slides were collected from the laboratory routine workload of three medical centers. The country of origin is not specified, but the submitter information lists "Tel Aviv, Israel" as the sponsor address, which may suggest the data originated from Israel, though this is not explicitly stated for the clinical evaluation. The data is retrospective, as it was collected from "routine workload."
3. Number of Experts and Qualifications for Ground Truth Establishment (Test Set)
The document does not explicitly state the number of experts used to establish the ground truth for the test set or their specific qualifications (e.g., years of experience). The description of a "method comparison study" evaluated against "pre-defined acceptance criteria" implies comparison against human expert interpretation, but the details of that reference interpretation are not provided.
4. Adjudication Method for the Test Set
The adjudication method used to establish the ground truth for the test set is not specified in the provided document.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study in which human readers' performance with and without AI assistance was evaluated. The study described is a method comparison between the modified device (with added DSS) and either the previously cleared device or human expert consensus, not a comparative effectiveness study measuring improvement in human reader performance.
6. Standalone Performance Study (Algorithm Only)
The document describes the device as providing "system-suggested RBC morphological pre-gradings" and a "system-suggested platelet clump indication" by means of a dotted line, and notes that "the user is still required to review the slide and actively mark the final grading." The device therefore functions as an assistive tool (human-in-the-loop) rather than a standalone algorithm making final diagnoses. The performance metrics presented (overall agreement, PPA, NPA) are likely comparisons of the system's suggestions against ground truth, which implicitly reflects a standalone component, but the reported performance is framed in the context of assisting a qualified technologist. A purely standalone (algorithm-only, without human-in-the-loop) performance claim is not explicitly presented for clinical use.
7. Type of Ground Truth Used
Based on the description of the "method comparison study" and the nature of the device (assisting a qualified technologist), the ground truth was likely established by expert consensus or through a reference method performed by qualified experts in hematology/morphology. The context of "laboratory routine workload" and comparison implies an existing gold standard interpretation.
8. Sample Size for the Training Set
The document does not provide information on the sample size used for the training set of the AI/DSS components. It only details the test set used for performance evaluation of the modified device.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for the training set was established. It states that "Cell images are analysed using standard mathematical methods, including deterministic artificial neural networks (ANN's) trained to distinguish between classes of white blood cells." This implies a training process, but the details of ground truth establishment for that training are omitted.
Product code: JOY · Review time: 502 days
AI100 with Shonit™ is a cell locating device intended for in-vitro diagnostic use in clinical laboratories.
AI100 with Shonit™ is intended for differential count of White Blood Cells (WBC), characterization of Red Blood Cells (RBC) morphology and Platelet morphology. It automatically locates blood cells on peripheral blood smears and presents images of the blood cells for review.
A skilled operator, trained in the use of the device and in the review of blood cells, identifies each cell according to type.
The AI100 with Shonit™ device consists of a high-resolution microscope with LED illumination and computing components: a motherboard, CPU, RAM, Wi-Fi dongle, an SSD containing the AI100 with Shonit™ software, a motorized XYZ stage, a camera with firmware, a PCB with firmware for driving the motor and LED, an SMPS power supply, and a casing. It is capable of handling one Peripheral Blood Smear (PBS) slide at a time.
Software plays an intrinsic role in the AI100 with Shonit™ device, and the combination of hardware and software works together to achieve the device's intended use. The main functions of the software can be summarized as follows:
- Allow the user to set up the device and perform imaging of a PBS slide.
- Control the hardware components (camera, LEDs, stages, etc.) to take images of a PBS slide.
- Store and manage images and other data corresponding to the PBS slide and present them to the user.
- Analyze images, allow the user to identify components in the images, and create a report for review.
- Allow the user to finalize, download, and print a report.
Here's a breakdown of the acceptance criteria and study details for the AI100 with Shonit™ device, based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The document doesn't explicitly list a table of "acceptance criteria" alongside "reported device performance." Instead, it presents the results of various studies and states that "All tests met acceptance criteria." We can infer the acceptance criteria from the context of these results.
Here's an inferred table based on the "Method Comparison Study" section, which compares the device to manual microscopy:
Metric | Acceptance Criteria (Implied) | Reported Device Performance (95% CI) |
---|---|---|
WBC Differential (Passing-Bablok Regression) | ||
Neutrophils (%) | Slope CI should include 1, Intercept CI should include 0 | Slope: 1.024 (1.016, 1.032), Intercept: -1.78 (-2.249, -1.346) |
Lymphocytes (%) | Slope CI should include 1, Intercept CI should include 0 | Slope: 1.025 (1.016, 1.034), Intercept: -0.587 (-0.881, -0.306) |
Eosinophils (%) | Slope CI should include 1, Intercept CI should include 0 | Slope: 1.029 (1.012, 1.05), Intercept: -0.039 (-0.07, -0.01) |
Monocytes (%) | Slope CI should include 1, Intercept CI should include 0 | Slope: 1.083 (1.051, 1.117), Intercept: -0.462 (-0.66, -0.304) |
WBC Abnormalities (Sensitivity, Specificity, Overall Agreement) | "Met the acceptance criteria" | |
Morphological Abnormality | N/A | Overall Agreement: 91.7% (90.4%, 92.8%), Sensitivity: 95.3% (92.8%, 96.7%), Specificity: 90.9% (89.4%, 92.2%) |
Distributional Abnormality | N/A | Overall Agreement: 96.4% (95.5%, 97.2%), Sensitivity: 91.0% (86.8%, 93.9%), Specificity: 97.2% (96.3%, 97.9%) |
Overall WBC Abnormality | N/A | Overall Agreement: 95.0% (94.0%, 95.9%), Sensitivity: 92.7% (89.2%, 95.0%), Specificity: 95.4% (94.3%, 96.3%) |
RBC Morphologies (Sensitivity, Specificity, Overall Agreement) | "Met the acceptance criteria" | |
Anisocytosis | N/A | Sensitivity: 91.1% (88.1%, 93.4%), Specificity: 95.9% (94.7%, 96.9%), Overall Agreement: 94.7% (93.6%, 95.7%) |
Macrocytosis | N/A | Sensitivity: 90.7% (87.0%, 93.5%), Specificity: 96.6% (95.5%, 97.4%), Overall Agreement: 95.5% (94.5%, 96.4%) |
Poikilocytosis | N/A | Sensitivity: 96.3% (94.8%, 97.3%), Specificity: 88.1% (85.8%, 90.0%), Overall Agreement: 92.1% (90.7%, 93.2%) |
Platelet Morphologies (Sensitivity, Specificity, Overall Agreement) | "Met the acceptance criteria" | |
Platelets (Overall) | N/A | Sensitivity: 100% (99.8%, 100%), Specificity: 100% (34.2%, 100%), Overall Agreement: 100% (99.8%, 100%) |
Giant Platelets | N/A | Sensitivity: 99.1% (98.4%, 99.5%), Specificity: 92.4% (90.3%, 94.1%), Overall Agreement: 96.4% (95.4%, 97.1%) |
Platelet Clumps | N/A | Sensitivity: 91.6% (89.5%, 93.4%), Specificity: 96.3% (94.9%, 97.3%), Overall Agreement: 94.2% (93.0%, 95.2%) |
Overall Platelets | N/A | Sensitivity: 97.9% (97.1%, 98.4%), Specificity: 94.6% (92.8%, 95.9%), Overall Agreement: 96.8% (96.0%, 97.4%) |
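For context on the Passing-Bablok acceptance rule quoted in the table (slope CI should include 1, intercept CI should include 0), the following is a simplified sketch of the estimator: the slope is taken as the median of all pairwise slopes (the full method uses a shifted median with an offset for slopes below -1 and tie corrections, omitted here), and the intercept follows from it. The paired values are hypothetical, so this is illustrative only.

```python
# Simplified Passing-Bablok regression sketch (no tie or negative-slope
# corrections; not the validated implementation used in the submission).
import numpy as np

def passing_bablok(x: np.ndarray, y: np.ndarray):
    n = len(x)
    slopes = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            dx = x[j] - x[i]
            if dx != 0:
                slopes.append((y[j] - y[i]) / dx)
    slopes = np.sort(np.array(slopes))
    slope = np.median(slopes)
    intercept = np.median(y - slope * x)
    # Approximate rank-based 95% CI for the slope; the intercept CI
    # (derived from the slope CI endpoints) is omitted for brevity.
    N = len(slopes)
    c = 1.96 * np.sqrt(n * (n - 1) * (2 * n + 5) / 18.0)
    lo = int(np.floor((N - c) / 2))
    hi = int(np.ceil((N + c) / 2)) + 1
    return slope, intercept, (slopes[max(lo, 0)], slopes[min(hi, N - 1)])

# Hypothetical paired neutrophil percentages (device vs. manual):
x = np.array([55.0, 61.2, 48.7, 70.3, 66.1, 52.4, 59.8, 63.0])
y = np.array([54.1, 62.0, 49.5, 71.8, 65.0, 53.2, 60.5, 64.1])
print(passing_bablok(x, y))
```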
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Method Comparison Study: A total of 882 samples were collected and analyzed.
- 298 normal samples
- 584 abnormal samples
- Data Provenance:
- Country of Origin: Not explicitly stated, but the submitter is SigTuple Technologies Pvt. Ltd. from Bangalore, Karnataka, India. The regulatory consulting firm is US-based. The clinical study was conducted across four sites, implying a multi-site study.
- Retrospective or Prospective: Not explicitly stated, but the phrasing "samples were collected and analyzed" suggests a prospective collection for the purpose of the study.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: "two medical reviewers at each site" for the method comparison study. Since there were four sites, this implies a total of 8 medical reviewers.
- Qualifications of Experts: They were described as "trained qualified reviewers" and "skilled operator, trained in the use of the device and in the review of blood cells." The document specifies that the ground truth review was done by "performing manual microscopy," indicating expertise in manual blood smear analysis.
4. Adjudication Method for the Test Set
- The document states that the "stained slides were read by two medical reviewers at each site both on the AI100 with Shonit™ device and manual microscope (reference method)."
- It does not explicitly describe an adjudication method (e.g., 2+1, 3+1) if the two reviewers disagreed on the ground truth. It seems the comparison was direct, with both reviewers' manual microscopy results forming part of the "reference method."
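For readers unfamiliar with the "2+1" shorthand mentioned above, the sketch below illustrates that adjudication rule in code: two primary readers classify each cell, and a third reader is consulted only on disagreement. This rule is not described in the submission; the function and labels here are purely hypothetical.

```python
# Illustrative "2+1" adjudication rule (hypothetical; not described in
# the submission): reader 3 breaks ties between readers 1 and 2.
from typing import Optional

def adjudicate_2_plus_1(reader1: str, reader2: str,
                        reader3: Optional[str] = None) -> str:
    if reader1 == reader2:          # primary readers agree: consensus
        return reader1
    if reader3 is None:
        raise ValueError("disagreement requires a third (adjudicating) read")
    return reader3                  # the third reader's call is final

print(adjudicate_2_plus_1("neutrophil", "neutrophil"))        # consensus
print(adjudicate_2_plus_1("blast", "lymphocyte", "blast"))    # adjudicated
```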
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
- A type of MRMC study was performed in the "Method Comparison Study" section, where "two medical reviewers at each site" read the slides. However, this study was primarily designed to compare the device's output (with human verification) to the manual microscopy method (human reference).
- The study design directly compares the device-assisted reading (where the human operator reviews and verifies the AI's suggestions) against the manual microscopy reference method. It is not designed to quantify the effect size of how much human readers improve with AI vs. without AI assistance for the same human reader. The human readers are presented with the AI's suggestions in one arm and perform traditional manual microscopy in the other.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- The document mentions "pre-classified results suggested by the device" and "pre-characterized results suggested by the device" in the repeatability and reproducibility studies (Sections 5.11 and 5.12). This indicates that the algorithm's raw classifications were evaluated in these analytical performance studies.
- The "principle of operation" also states: "On each FOV image, image processing is applied to extract and classify WBCs, RBCs, and Platelets."
- However, the clinical performance (method comparison study in Section 5.13) which leads to the substantial equivalence determination, is based on a human-in-the-loop workflow: "A skilled operator, trained in the use of the device and in the review of blood cells, identifies and classifies each cell according to type." and "The device then allows the user to review the identified and classified cells... The user may re-classify cells and add impressions as they deem fit and approve the report."
- Therefore, while the algorithm's internal performance was evaluated analytically, the regulatory submission for substantial equivalence focuses on the device's performance as a human-in-the-loop system, not a standalone AI.
7. The Type of Ground Truth Used
- The ground truth used for the clinical performance study (method comparison) was expert consensus / manual microscopy by skilled operators. Specifically, slides were "read by two medical reviewers at each site both on the AI100 with Shonit™ device and manual microscope (reference method)." The "reference method" from manual microscopy served as the ground truth.
8. The Sample Size for the Training Set
- The document does not explicitly state the sample size used for the training set of the AI. The discussions focus on analytical and clinical validation studies for regulatory clearance.
9. How the Ground Truth for the Training Set Was Established
- The document does not explicitly describe how the ground truth for the training set was established. It mentions the use of "neural network of convolutional type" which implies deep learning, but the specifics of its training data and ground truth labeling are not detailed in this regulatory summary.
Product code: JOY · Review time: 119 days
The X100HT with Full Field Peripheral Blood Smear (PBS) Application is intended to locate and display images of white cells, red cells, and platelets acquired from fixed and stained peripheral blood smears and assists a qualified technologist in conducting a WBC differential, RBC morphology evaluation, and platelet estimate using those images. For in vitro diagnostic use only. For professional use only.
X100HT with Full Field Peripheral Blood Smear (PBS) Application automatically locates and presents high resolution digital images from fixed and stained peripheral blood smears. The user browses through the imaged smear to gain high-level general impressions of the sample. In conducting the white blood cell (WBC) differential, the user reviews the X100HT with Full Field PBS suggested classification of each automatically detected WBC and may manually change the suggested classification of any cell. In conducting red blood cell (RBC) morphology evaluation, the user can characterize RBC morphology on observed images. In conducting platelet estimation, the user reviews each automatically detected platelet and the suggested platelet estimation and may manually change the detections or the estimation. The X100HT with Full Field PBS enables efficient slide loading by providing three cassettes, each of which can be loaded with up to ten peripheral blood smear slides. The slide loader automatically adds mounting media and coverslips to the slides and loads them into the X100 for scanning and analysis. The X100HT with Full Field PBS is intended to be used by skilled users, trained in the use of the device and in the identification of blood cells.
The provided text describes the regulatory clearance of the Scopio X100HT with Full Field Peripheral Blood Smear (PBS) Application, comparing it to a predicate device (X100 with Full Field PBS Application). While it outlines the device's intended use and the general types of testing performed (software, hardware, EMC, safety), it does not contain explicit details on the acceptance criteria or the specific study results that prove the device meets these criteria for the AI/automation components of the system.
The document primarily focuses on demonstrating substantial equivalence to a predicate device, particularly highlighting the addition of a 'Slide Loader' and minor software modifications for workflow efficiency, rather than a detailed performance study of the AI's diagnostic capabilities. The core image analysis and AI components ("standard mathematical methods, including deterministic artificial neural networks (ANN's) trained to distinguish between classes of white blood cells") are stated to be "identical" to the predicate device. Therefore, a comprehensive performance study as requested, particularly regarding the AI's diagnostic accuracy against a ground truth and comparative effectiveness with human readers, is not present in this document.
However, based on the information provided, here's what can be extracted and inferred, with acknowledgments of missing details:
Acceptance Criteria and Device Performance (Inferred/General)
Since the core AI/analysis technique is stated to be "identical" to the predicate device, it's implied that the performance of the X100HT (regarding cell classification accuracy, etc.) would be similar to what was demonstrated for the predicate device's clearance. The document focuses on the new functionality (slide loader) and how it does not raise new questions of safety or effectiveness, meaning the existing performance of the analytical portion is presumed acceptable.
Table 1: Acceptance Criteria and Reported Device Performance
Performance Metric Category | Acceptance Criteria (Inferred from Predicate's Clearance, not explicitly stated for X100HT in this doc) | Reported Device Performance (Inferred, as core AI is identical to predicate) |
---|---|---|
WBC Differential Accuracy | (Not explicitly stated for X100HT; performance equivalent to predicate expected) | Achieves pre-classified WBC categorization using ANNs, to be reviewed by user. |
RBC Morphology Evaluation Presentation | (Not explicitly stated for X100HT) | Presents an overview image for examiner characterization. |
Platelet Estimation Accuracy | (Not explicitly stated for X100HT; performance equivalent to predicate expected) | Automatically locates/counts platelets, provides estimate for user review. |
Functional Equivalence to Predicate | The device's results (images and suggested classifications) are substantively equivalent to the predicate. | Stated to be "identical" analysis technology to K201301 predicate. |
Software Functionality (Slide Loader) | Integration of slide loader enhances workflow without compromising core analysis or safety. | Replaces manual steps of mounting media/coverslipping and slide loading. |
Safety and EMC | Compliance with IEC/EN standards for safety and EMC. | Successfully passed IEC 60601-1-2, FCC Part 15 Subpart B, IEC 61010-2-101, IEC 61010-1, IEC 62471. |
Details on the Study Proving Device Meets Acceptance Criteria:
1. Sample Size Used for the Test Set and Data Provenance:
- Not specified in the provided text. The document states "Verification and validation testing was conducted and documentation was updated," but does not list sample sizes for these tests, nor the origin (country) or nature (retrospective/prospective) of the data. Given the device's classification and the focus on "substantial equivalence," it's possible detailed clinical performance data was not a primary requirement for this 510(k), as the core AI was already cleared.
2. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
- Not specified in the provided text. The document mentions the device "assists a qualified technologist" and is for "skilled users, trained in the use of the device and in the identification of blood cells," but does not detail the experts used for ground truth generation in any validation studies.
3. Adjudication Method for the Test Set:
- Not specified in the provided text.
4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- Not specified in the provided text. The document emphasizes that the user "reviews the suggested classification" and "may manually change the suggested classification," indicating a human-in-the-loop workflow. However, an MRMC study comparing human performance with and without AI assistance is not described.
5. Standalone (Algorithm Only) Performance:
- A standalone performance study of the algorithm's accuracy in classifying cells (without human review/override) is not explicitly detailed in the provided text for the X100HT. The description of the device's function clearly outlines a "pre-classified" stage where the ANN suggests classifications, which are then reviewed and potentially modified by a human user. The performance reported is thus implicitly a human-in-the-loop performance, but the standalone accuracy is not quantified.
6. Type of Ground Truth Used:
- Not specified in the provided text. Since the device "pre-classifies" cells, the ground truth for training and validating the ANN would likely involve expert consensus or manual expert classification of blood cells. However, this is not explicitly stated.
7. Sample Size for the Training Set:
- Not specified in the provided text. The document mentions "deterministic artificial neural networks (ANN's) trained to distinguish between classes of white blood cells," but the size of the training dataset is not provided.
8. How the Ground Truth for the Training Set Was Established:
- Not specified in the provided text. Similar to point 6, it can be inferred that expert classification was used, but the specific process (e.g., number of experts, consensus methods) is not described.
Summary of Missing Information:
The provided 510(k) summary focuses almost entirely on demonstrating that the X100HT, with its new slide loader, is substantially equivalent to an already cleared predicate device (K201301). It highlights that the core analytical software and imaging technology responsible for AI-assisted cell classification are "identical" to the predicate. Therefore, details regarding new performance studies for the AI component itself (acceptance criteria, test set sizes, ground truth establishment, MRMC studies) are not present in this document, as the performance aspect of the AI was likely covered in the predicate device's clearance. This document serves to demonstrate that the modifications (primarily the slide loader) do not negatively impact the previously established safety and effectiveness.
Product code: JOY · Review time: 224 days
CellaVision® DC-1 is an automated cell-locating device intended for in-vitro diagnostic use in clinical laboratories.
CellaVision® DC-1 is intended to be used by operators, trained in the use of the device.
Intended use of the Peripheral Blood Application
The Peripheral Blood Application is intended for differential count of white blood cells (WBC), characterization of red blood cell (RBC) morphology and platelet estimation.
The CellaVision® DC-1 with the Peripheral Blood Application automatically locates blood cells on peripheral blood (PB) smears. The application presents images of the blood cells for review. A skilled operator, trained in recognition of blood cells, identifies and verifies the suggested classification of each cell according to type.
The CellaVision® DC-1 is an automated cell-locating device intended for in-vitro diagnostic use. CellaVision® DC-1 automatically locates and presents images of blood cells found on peripheral blood smears. The operator identifies and verifies the suggested classification of each cell according to type. CellaVision® DC-1 is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells.
The CellaVision® DC-1 consists of a built-in PC with a Solid-State Disc (SSD) containing CellaVision DM Software (CDMS), a high-power magnification microscope with a LED illumination, an XY stage, a proprietary camera with firmware, built-in motor- and illumination LED controller, a casing and an external power supply. It is capable of handling one slide at a time.
Here's a detailed breakdown of the acceptance criteria and study information for the CellaVision® DC-1 device, based on the provided text:
Acceptance Criteria and Device Performance
Parameter | Acceptance Criteria | Reported Device Performance |
---|---|---|
WBC Repeatability | Evaluated on proportional cell count (%) for each cell class; variation in mean values between slide 1 and slide 2 for specific cell types must not exceed the acceptance criteria. | Overall successful; all tests except repeatability and within-laboratory precision on three occasions met acceptance criteria. Three samples displayed variation in mean values between slide 1 and slide 2 for specific cell types slightly above the acceptance criteria. |
RBC Repeatability | Evaluated on agreement (percentage of runs reporting the grade) for each morphology; variation in mean values between slide 1 and slide 2 for specific cell types must not exceed the acceptance criteria. | Overall successful. One sample displayed variation in mean value between slide 1 and slide 2 for the specific cell type above the acceptance criteria. |
PLT Repeatability | Agreement (percentage of runs reporting the actual PLT level) for each PLT level. | Met the acceptance criterion in all samples at all three sites. |
WBC Reproducibility | Not explicitly stated in quantitative terms but implied to be successful based on ANOVA on proportional cell count for each class. | Overall successful. |
RBC Reproducibility | Not explicitly stated in quantitative terms but implied to be successful based on agreement (percentage of runs reporting the grade). | Overall successful. |
PLT Reproducibility | Agreement (percentage of runs reporting the most prevalent level) for each PLT level. | Met the acceptance criterion in all samples. Overall successful. |
WBC Comparison (Accuracy) | Linear regression slope: 0.8-1.2. Intercept: ±0.2 for individual cell types with a normal level of >5%. | Accuracy evaluations successful in all studies, showing no systematic difference between DC-1 and DM1200. Agreement for Segmented Neutrophils, Lymphocytes, Eosinophils, and Monocytes were all within acceptance criteria. |
WBC Comparison (Distribution) | Not explicitly stated, but PPA, NPA, and OA percentages are reported for comparison to predicate. | OA 89.8%, PPA 89.2%, NPA 90.4% |
WBC Comparison (Morphology) | Not explicitly stated, but PPA, NPA, and OA percentages are reported for comparison to predicate. | OA 91.3%, PPA 88.6%, NPA 92.5% |
RBC Comparison (Color) | Not explicitly stated, but PPA, NPA, and OA percentages are reported for comparison to predicate with 95% CI. | OA 79.9% (76.4%-82.9%), PPA 87.8% (82.3%-91.8%), NPA 76.3% (71.9%-80.2%) |
RBC Comparison (Size) | Not explicitly stated, but PPA, NPA, and OA percentages are reported for comparison to predicate with 95% CI. | OA 88.2% (85.3%-90.6%), PPA 89.8% (86.3%-92.2%), NPA 84.8% (79.0%-89.2%) |
RBC Comparison (Shape) | Not explicitly stated, but PPA, NPA, and OA percentages are reported for comparison to predicate with 95% CI. | OA 85.2% (82.0%-87.8%), PPA 87.3% (82.3%-91.0%), NPA 83.8% (79.6%-87.3%) |
PLT Comparison | Cohen's Kappa coefficient ≥ 0.6. | Weighted Kappa 0.89405 (95% CI: 0.87062 to 0.91748), meeting the acceptance criterion. |
EMC Testing | Conformity with IEC 61010-1:2010 and IEC 61010-2-101:2015. | The test showed that the DC-1 is in conformity with IEC 61010-1:2010 and IEC 61010-2-101:2015. |
Software V&V | Software verification and validation testing documentation provided as recommended by FDA. | Documentation was provided as recommended by FDA's Guidance for Industry and Staff. Software classified as "moderate" level of concern. |
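Since the PLT comparison criterion above is a weighted Cohen's kappa of at least 0.6, here is a minimal sketch of that statistic computed from an ordinal confusion matrix. The document does not say whether linear or quadratic weights were used; linear weights and a hypothetical 4x4 PLT-level matrix are assumed.

```python
# Minimal sketch of linearly weighted Cohen's kappa for ordinal levels.
# Weight choice and the example matrix are assumptions for illustration.
import numpy as np

def weighted_kappa(confusion: np.ndarray) -> float:
    k = confusion.shape[0]
    n = confusion.sum()
    # Linear disagreement weights: w[i, j] = |i - j| / (k - 1)
    w = np.abs(np.subtract.outer(np.arange(k), np.arange(k))) / (k - 1)
    observed = (w * confusion).sum() / n
    row, col = confusion.sum(axis=1), confusion.sum(axis=0)
    expected = (w * np.outer(row, col)).sum() / n**2
    return 1 - observed / expected

# Hypothetical 4x4 PLT-level confusion matrix (DC-1 vs. DM1200):
conf = np.array([[50,  4,  0,  0],
                 [ 3, 60,  5,  0],
                 [ 0,  6, 45,  2],
                 [ 0,  0,  3, 22]])
print(round(weighted_kappa(conf), 3))
```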
Study Details
-
Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
- WBC Comparison: 598 samples (1196 slides; one A and one B slide per sample).
- RBC Comparison: 586 samples.
- PLT Comparison: 598 samples.
- Repeatability and Reproducibility studies:
- WBC & RBC Repeatability: Five samples at each of three sites.
- PLT Repeatability: Four samples at each of three sites.
- WBC Reproducibility: Five samples, five slides prepared from each, processed daily at three sites for five days.
- RBC & PLT Reproducibility: Four samples, five slides prepared from each, processed daily at three sites for five days.
- Data Provenance: The studies were conducted in a clinical setting using three different laboratories. The specific country of origin is not mentioned, but the context implies clinical labs. The studies appear to be prospective as samples were processed for the studies.
-
Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
- The document states that a "skilled operator trained in recognition of blood cells" identifies and verifies the suggested classification. For the comparison studies, the predicate device (DM1200) results served as the reference for comparison. While the specific number or qualifications of experts adjudicating the DM1200's results (which serve as the reference standard) are not explicitly stated, the context implies that a medically trained professional would be involved in generating and validating those results. The study design for WBC and RBC comparisons references CLSI H20-A2, which typically outlines protocols for comparison of manual differential leukocyte counts, implying that the "ground truth" (or reference standard) would have been established by trained medical technologists or pathologists according to accepted procedures.
-
Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- The document describes a comparison study between the CellaVision® DC-1 (test device) and the CellaVision® DM1200 (reference device, K092868). The results from a "skilled operator" using the predicate device were used as the reference data. Therefore, the adjudication method for the test set ground truth (if DM1200's results are considered the ground truth equivalent) is essentially the standard workflow of the predicate device with human operator verification. There is no explicit mention of an adjudication panel for discrepancies between human observers or between the device and a human. The DC-1's proposed classifications are compared against the verified classifications from the DM1200.
-
If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No MRMC comparative effectiveness study evaluating human readers with AI vs without AI assistance was presented. The study focuses on comparing the new device (DC-1, which uses CNN for pre-classification) against a predicate device (DM1200, which uses ANN for pre-classification), both of which employ AI in a human-in-the-loop workflow. Both systems are "automated cell-locating devices" where a "skilled operator" reviews and verifies the pre-classified cells. The study does not quantify the improvement of a human reader with either AI system compared to a human reader performing a completely manual differential count without any AI assistance.
-
If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- No, a standalone (algorithm only) performance study was not done or reported as part of the submission. The device is explicitly intended for "in-vitro diagnostic use in clinical laboratories" where a "skilled operator...identifies and verifies the suggested classification of each cell according to type." The device "pre-classifies" WBCs and "pre-characterizes" RBCs, but the operator's verification is integral to the intended use. The performance data for repeatability and comparability were evaluated on the "preclassified results suggested by the device" (Repeatability) or "verified WBC/RBC results from the DC-1" (Comparison), indicating the involvement of a human in the final assessment for the comparison studies.
-
The type of ground truth used (expert consensus, pathology, outcomes data, etc):
- The ground truth for the comparison studies was established by the verified results obtained from the predicate device (CellaVision® DM1200) operated by a skilled professional. This is implied to be equivalent to an expert consensus based on established laboratory practices, as the DM1200 is also a device requiring human verification. For repeatability and reproducibility studies, "preclassified results suggested by the device" or "verified data" were used, again implying that the 'ground truth' for evaluating the DC-1's consistency relies on its own outputs, or validated outputs, rather than an independent expert consensus on raw slides.
-
The sample size for the training set:
- The document does not specify the sample size used for training the Convolutional Neural Networks (CNN) for the CellaVision® DC-1. It only mentions that CNNs are used for preclassification and precharacterization.
-
How the ground truth for the training set was established:
- The document does not specify how the ground truth for the training set was established. It only states that the CNNs are "trained to distinguish between classes of white blood cells."
Product code: JOY · Review time: 140 days
The X100 with Full Field Peripheral Blood Smear Application is intended to locate and display images of white cells, red cells, and platelets acquired from fixed and stained peripheral blood smears and assists a qualified technologist in conducting a WBC differential, RBC morphology evaluation, and platelet estimate using those images. For in vitro diagnostic use only. For professional use only.
X100 with Full Field Peripheral Blood Smear Application (Scopio's Full Field PBS) automatically locates and presents high resolution digital images from fixed and stained peripheral blood smears. The user browses through the imaged smear to gain high-level general impressions of the sample. In conducting the white blood cell (WBC) differential, the user reviews the suggested classification of each automatically detected WBC, and may manually change the suggested classification of any cell. In conducting red blood cell (RBC) morphology evaluation, the user can characterize RBC morphology on observed images. In conducting platelet estimation, the user reviews each automatically detected platelet and the suggested platelet estimation, and may manually change the detections or the estimation. The X100 with Full Field Peripheral Blood Smear Application is intended to be used by skilled users, trained in the use of the device and in the identification of blood cells.
The provided text describes the performance data for the X100 with Full Field Peripheral Blood Smear Application, comparing its results to those achieved by using a manual light microscope, which serves as the reference method.
Here's a breakdown of the requested information:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state pre-defined acceptance criteria for the percentage values (e.g., correlation coefficient, efficiency, sensitivity, specificity). However, it consistently states that "All method comparison testing met acceptance criteria." This implies that the achieved performance met internal or regulatory thresholds. Based on the provided performance data, here's a table:
Test/Measurement | Acceptance Criteria (Implicitly Met) | Reported Device Performance |
---|---|---|
WBC Correlation (Deming Regression r) | Met Acceptance | |
Neutrophil (%) | Not explicitly stated | 98% |
Lymphocyte (%) | Not explicitly stated | 96% |
Monocyte (%) | Not explicitly stated | 95% |
Eosinophil (%) | Not explicitly stated | 98% |
WBC Differential (Efficiency) | Met Acceptance | |
Morphological Abnormality | Not explicitly stated | 96.82% (96.12% to 97.43% CI) |
Distributional Abnormality | Not explicitly stated | 95.75% (94.95% to 96.46% CI) |
Overall WBC | Not explicitly stated | 96.29% (95.77% to 96.76% CI) |
WBC Differential (Sensitivity) | Met Acceptance | |
Morphological Abnormality | Not explicitly stated | 85.46% (80.19% to 89.78% CI) |
Distributional Abnormality | Not explicitly stated | 88.83% (85.94% to 91.31% CI) |
Overall WBC | Not explicitly stated | 87.86% (85.38% to 90.06% CI) |
WBC Differential (Specificity) | Met Acceptance | |
Morphological Abnormality | Not explicitly stated | 97.79% (97.16% to 98.31% CI) |
Distributional Abnormality | Not explicitly stated | 97.43% (96.70% to 98.03% CI) |
Overall WBC | Not explicitly stated | 97.62% (97.16% to 98.02% CI) |
RBC Morphology (Overall Agreement) | Met Acceptance | |
Overall | Not explicitly stated | 99.77% (99.71% to 99.83% CI) |
Color Group | Not explicitly stated | 99.49% (99.14% to 99.73% CI) |
Shape Group | Not explicitly stated | 99.77% (99.68% to 99.84% CI) |
Size Group | Not explicitly stated | 99.61% (99.36% to 99.78% CI) |
Inclusions Group | Not explicitly stated | 100.00% (99.93% to 100.00% CI) |
Arrangement Group | Not explicitly stated | 96.65% (95.52% to 97.57% CI) |
Platelet Estimation (Deming Regression r) | Met Acceptance | |
Platelets Estimation (10^3/μL) | Not explicitly stated | 94% |
Platelet Estimation (Efficiency) | Met Acceptance | |
Overall | Not explicitly stated | 94.89% (92.78% to 96.53% CI) |
Platelet Estimation (Sensitivity) | Met Acceptance | |
Overall | Not explicitly stated | 90.00% (83.51% to 94.57% CI) |
Platelet Estimation (Specificity) | Met Acceptance | |
Overall | Not explicitly stated | 96.28% (94.11% to 97.82% CI) |
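The WBC and platelet comparisons above are reported as Deming regression correlations. As a minimal sketch, assuming an error-variance ratio (lambda) of 1, which the document does not specify, and hypothetical paired values:

```python
# Minimal Deming regression sketch; lambda = 1 and the example data are
# assumptions, not values from the clearance document.
import numpy as np

def deming(x: np.ndarray, y: np.ndarray, lam: float = 1.0):
    mx, my = x.mean(), y.mean()
    sxx = ((x - mx) ** 2).mean()
    syy = ((y - my) ** 2).mean()
    sxy = ((x - mx) * (y - my)).mean()
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
             + 4 * lam * sxy ** 2)) / (2 * sxy)
    intercept = my - slope * mx
    r = sxy / np.sqrt(sxx * syy)   # Pearson r, as quoted in the table
    return slope, intercept, r

# Hypothetical paired platelet estimates (device vs. manual, 10^3/uL):
x = np.array([150.0, 220.0, 85.0, 310.0, 40.0, 190.0, 260.0])
y = np.array([158.0, 210.0, 90.0, 300.0, 46.0, 185.0, 255.0])
print(deming(x, y))
```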
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: A total of 645 specimens.
- 335 specimens were from normal (healthy) subjects.
- 310 specimens were from subjects with specific disease conditions.
- Data Provenance:
- Country of Origin: Not explicitly stated. The study was conducted at "three sites" but their geographical location is not specified.
- Retrospective or Prospective: Not explicitly stated, but the description "specimens were collected and analyzed at three sites" with slides being "randomly selected, blinded and read" suggests a prospective or at least prospectively designed evaluation of collected samples.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: "two examiners at each site." Since there were three sites, a total of 6 examiners were involved in establishing the ground truth.
- Qualifications of Experts: The document states that the ground truth was established by "trained examiners" using a "manual light microscope." It also mentions elsewhere that the device is intended for "skilled users, trained in the use of the device and in the identification of blood cells." While specific certifications or years of experience are not detailed, the implication is that these examiners are qualified clinical laboratory professionals adept at manual blood smear analysis.
4. Adjudication method for the test set
- The text states, "The slides were randomly selected, blinded and read by two examiners at each site." It does not explicitly mention a formal adjudication method (e.g., 2+1, 3+1 consensus). It simply states that results were compared between the "Test Method" (X100) and the "Reference Method" (manual light microscope). It is implied that the manual readings by the two examiners constituted the reference. There is no information provided about how discrepancies between the two examiners' manual readings (if any) were resolved or if their results were averaged.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance was not the primary focus described.
- The study primarily functions as a method comparison study, comparing the device's performance (which assists human readers) directly against the manual light microscope method (the established reference/ground truth).
- The device "assists a qualified technologist" by locating and displaying images and suggesting classifications. The study evaluates the device-assisted technologist's performance against the manual technologist's performance.
- Therefore, an "effect size of how much human readers improve with AI vs without AI assistance" is not reported in the context of a dedicated MRMC study.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- No, a standalone (algorithm-only) performance was not described or evaluated.
- The device's intended use and the study design clearly state that it "assists a qualified technologist." The performance data reflects the combined system of the device and the human user, where the user reviews and can modify the device's suggestions (e.g., "may manually change the suggested classification of any cell," "may manually change the detections or the estimation"). It is a human-in-the-loop system.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- The ground truth was established by expert readings using a manual light microscope. This is effectively expert consensus if the two examiners at each site agreed, or if their readings were somehow combined to form the reference. The manual light microscope process itself is described as the "Reference Method."
8. The sample size for the training set
- The document does not specify the sample size for the training set. The performance data provided is solely for the "Method Comparison" study, which used the 645 specimens as the test set.
9. How the ground truth for the training set was established
- The document does not provide details on how the ground truth for the training set was established. Since the training set size is not mentioned, neither is the method for its ground truth. However, given that the device's "Analysis Technique" for WBCs uses "deterministic artificial neural networks (ANN's) trained to distinguish between classes of white blood cells," it is highly probable that the training ground truth was also established by expert classification of blood cells.
Product code: JOY · Review time: 90 days
The Sysmex® UD-10 is a fully automated urine particle digital imaging device for locating, digitally storing, and displaying microscopic images captured from urine specimens. The Sysmex® UD-10 locates and presents particles and cellular elements based on size ranges. The images are displayed for review and classification by a qualified clinical laboratory technologist on the Urinalysis Data Manager (UDM). This device is intended for in vitro diagnostic use in conjunction with a urine particle counter for screening patient populations found in clinical laboratories.
The Sysmex® UD-10 is a medical device that captures images of cells and particles found in urine with a camera and displays the images on a display screen. The displayed data consists of images of individual particles that are extracted from the original captured whole field images. The device sorts urine particle images based on their size into eight groups (Class 1-8). These images are transferred to the UDM (Urinalysis Data Manager), where the operator enters the classification of the particle images based on their visual examination. The classification of the particles by the operator is a designation of what type of particles are observed (e.g., WBCs, RBCs, casts, bacteria).
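Since the UD-10's sorting step is described as purely size-based, the following sketch shows what binning particle images into Classes 1-8 by size could look like. The threshold values are hypothetical; the actual size ranges are not disclosed in the document.

```python
# Illustrative size-based binning into eight classes. The boundary
# values below are hypothetical, not the UD-10's actual thresholds.
import bisect

# Hypothetical upper bounds (micrometers) for Classes 1-7; anything
# larger falls into Class 8.
CLASS_BOUNDS_UM = [3.0, 5.0, 8.0, 12.0, 20.0, 40.0, 80.0]

def size_class(particle_diameter_um: float) -> int:
    """Return the class number (1-8) for a particle of the given size."""
    return bisect.bisect_right(CLASS_BOUNDS_UM, particle_diameter_um) + 1

for d in (2.1, 7.5, 25.0, 150.0):
    print(d, "->", "Class", size_class(d))
```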
The Sysmex UD-10 is a device for locating, digitally storing, and displaying microscopic images captured from urine specimens. It presents particles and cellular elements based on size ranges for review and classification by a qualified clinical laboratory technologist.
Here's an analysis of the acceptance criteria and the studies performed:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state pre-defined acceptance criteria values for agreement percentages in the precision, repeatability, and method comparison studies (except for the minimum requirement for overall agreement in reproducibility and repeatability). However, it does provide conclusions based on the results meeting statistical thresholds. The carryover study had an acceptance criterion within ±1.00%.
Study Type | Metric | Acceptance Criteria | Reported Device Performance |
---|---|---|---|
Reproducibility | Overall Agreement | Lower 95% Confidence Limit > 80.9% (minimum requirement) | 97.9% (95% CI: 95.2%, 99.1%) |
Repeatability | Overall Agreement | Lower 95% Confidence Limit > 85.2% (minimum requirement) | 100.0% (95% CI: 97.8%, 100.0%) |
Carryover (RBC) | Carryover Effect | Within ±1.00% | 9.82x10^-23% to 4.16x10^-14% (PASS) |
Carryover (BACT) | Carryover Effect | Within ±1.00% | 0.00% to -2.57x10^-6% (PASS) |
Carryover (WBC) | Carryover Effect | Within ±1.00% | 1.05x10^-12% to 100.00% (FAIL at one site, but deemed clinically insignificant) |
Method Comparison | Overall Agreement (UD-10 vs. Manual Microscopy) | Exceed 85.2% (proposed requirement) | 92.0% (95% CI: 89.8%, 93.7%) |
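For context on the carryover criterion above (within ±1.00%), here is a minimal sketch of a common CLSI-style carryover calculation for the three-high/three-low aliquot design described in the study. The submission does not state the exact formula used, so this is an assumption for illustration.

```python
# Common carryover formula for three high runs followed by three low
# runs; assumed here, not confirmed by the submission.
def carryover_percent(high: list[float], low: list[float]) -> float:
    """carryover % = (L1 - L3) / (H3 - L3) * 100, using 1-based run order."""
    return (low[0] - low[2]) / (high[2] - low[2]) * 100.0

# Hypothetical WBC counts per uL for three high and three low runs:
print(round(carryover_percent(high=[950.0, 945.0, 948.0],
                              low=[12.4, 12.1, 12.0]), 4))
```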
2. Sample Size and Data Provenance
Reproducibility Study:
- Sample size: 240 evaluations (from 120 samples of abnormal and normal QC material, processed twice a day for a minimum of 5 days).
- Data provenance: Prospective, U.S. clinical sites (4 sites). Commercially available MAS® UA control material was used.
Repeatability Study:
- Sample size: 170 evaluations (from an unspecified number of normal residual urine samples, each assayed in 5 replicates).
- Data provenance: Prospective, U.S. clinical sites (4 sites). Normal residual urine samples, collected without preservatives.
Carryover Study:
- Sample size: Not explicitly stated as a total number of samples, but involved High and Low concentration samples for BACT (4 sites), WBC (3 sites), and RBC (3 sites). Each sample was split into 3 aliquots (3 high, 3 low) and run consecutively. Results are presented for 3 replicates of high and 3 replicates of low samples per parameter per site.
- Data provenance: Prospective, U.S. clinical sites (4 sites for BACT, 3 for WBC and RBC). Residual urine samples, collected without preservatives.
Method Comparison Study:
- Sample size: 746 abnormal and normal urine samples.
- Data provenance: Prospective, U.S. clinical sites (4 sites). Residual urine samples from daily routine laboratory workload, collected without preservatives.
3. Number of Experts and Qualifications for Ground Truth
Reproducibility Study:
- Number of experts: One technologist per site (total of 4 technologists across 4 sites).
- Qualifications: "Technologist." No specific experience level is mentioned.
Repeatability Study:
- Number of experts: Two technologists per sample per site, who independently reviewed and identified particle images.
- Qualifications: "Technologist." No specific experience level is mentioned.
Carryover Study:
- Number of experts: One technologist per site.
- Qualifications: "Technologist." No specific experience level is mentioned.
Method Comparison Study:
- Number of experts: Two technologists per sample per site. One classified elements on the UD-10, and a second performed visual read using manual light microscopy.
- Qualifications: "One technologist" and "a second technologist." No specific experience level is mentioned.
4. Adjudication Method
Reproducibility, Repeatability, and Carryover Studies:
- No explicit adjudication method is described. For repeatability, two technologists independently reviewed images, and "Each technologist's results were treated and recorded as an independent observation."
Method Comparison Study:
- No explicit adjudication method is described. One technologist used the UD-10, and another used manual microscopy. Their results were compared.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
There is no mention of a formal MRMC comparative effectiveness study in the sense of evaluating how much human readers improve with AI vs. without AI assistance. The study compares the UD-10 device's performance (which incorporates digital imaging and sorting for human review) against manual microscopy. The UD-10 is a digital imaging device, not strictly an 'AI' device in the typical sense of providing automated diagnosis or enhanced AI assistance to human readers for diagnostic interpretation (beyond presenting sorted images). The study is essentially a method comparison between the UD-10-assisted workflow and traditional manual microscopy.
6. Standalone Performance (Algorithm Only)
The Sysmex UD-10 is described as a device that "locates and presents particles and cellular elements based on size ranges. The images are displayed for review and classification by a qualified clinical laboratory technologist." This indicates that the device requires human-in-the-loop for classification and is not a standalone diagnostic algorithm. Its performance is implicitly tied to how well technologists can use the displayed images. The "Overall Agreement" metrics in the studies reflect the performance of the system (device + technologist).
7. Type of Ground Truth Used
Reproducibility Study:
- Ground truth: Expected results from commercially available MAS® UA control material (Level 1 and Level 2).
Repeatability Study:
- Ground truth: Reference results provided by screening samples with the Sysmex UF-1000i urine analyzer (K070910).
Carryover Study:
- Ground truth: Determined by Sysmex UF-1000i results for high and low concentration samples.
Method Comparison Study:
- Ground truth: For the initial comparison, manual microscopy was considered the comparative method. For the referee comparison, the Sysmex UF-1000i (K070910) was used as the referee method to evaluate agreement between UD-10 and manual microscopy.
8. Sample Size for the Training Set
The document is a 510(k) summary for a medical device that captures and displays images for human review, not an AI/ML algorithm that requires a "training set" in the conventional sense of machine learning model development. Therefore, there is no mention of a training set sample size. The device uses size ranges to sort images, indicating a rule-based or conventional image processing approach rather than a complex AI model that learns from diverse training data.
9. How the Ground Truth for the Training Set Was Established
As noted above, this device does not appear to involve a machine learning training set in the way a typical AI algorithm would. Thus, this question is not applicable based on the provided information.
Product code: JOY · Review time: 89 days
The CellaVision DM1200 with the Advanced RBC Application is an automated cell-locating device, intended for in-vitro diagnostic use.
The CellaVision DM1200 with the Advanced RBC Application automatically locates and presents images of blood cells on peripheral blood smears. The operator identifies and verifies the suggested classification of each cell according to type.
The CellaVision DM1200 with the Advanced RBC Application is intended for blood samples that have been flagged as abnormal by an automated cell counter.
The CellaVision DM1200 with the Advanced RBC Application is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells.
The CellaVision DM96 with the Advanced RBC Application is an automated cell-locating device, intended for in-vitro diagnostic use.
The CellaVision DM96 with the Advanced RBC Application automatically locates and presents images of blood cells on peripheral blood smears. The operator identifies and verifies the suggested classification of each cell according to type.
The CellaVision DM96 with the Advanced RBC Application is intended for blood samples that have been flagged as abnormal by an automated cell counter.
The CellaVision DM96 with the Advanced RBC Application is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells.
The Advanced RBC Application is substantially equivalent to the RBC functionality included in the predicate DM Systems. It pre-characterizes the morphology of the red blood cells in a sample based on abnormal color, size, and shape (Poikilocytosis). In addition to that, the Advanced RBC Application also pre-characterizes based on different types of Poikilocytosis and on the presence of certain inclusions.
The DM Systems display the result of the RBC pre-characterization as the percentage of abnormal cells for each morphological characteristic and as an automatically calculated grade (0 = normal through 3 = marked) corresponding to that percentage; a sketch of this mapping appears below. The systems also display an overview image of the RBC monolayer. The difference between the current RBC functionality and the Advanced RBC Application is the analysis technique, which enables the Advanced RBC Application to pre-characterize RBCs into 21 morphological characteristics, as opposed to 6 for the current RBC functionality. The cell images are pre-characterized into different groups of morphological characteristics based on size, color, shape, and inclusions using segmentation, feature calculation, and deterministic artificial neural networks (ANNs) trained to distinguish between morphological characteristics of red blood cells.
Another difference is that the red blood cells, pre-characterized by the Advanced RBC Application, can be displayed both in an overview and in individual images on the screen, while the current RBC functionality displays the pre-characterized red blood cells in an overview image only.
As in the current RBC functionality, the user reviews the overview image and can change the characterization by manually changing the grades for any morphological characteristic. With the Advanced RBC Application, the user can also view individual cells, grouped by morphological characteristic and change the characterization by reclassifying individual cells.
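As referenced above, the percentage-to-grade step can be sketched as follows. The percentage cut-offs and the per-cell flags are hypothetical; the document does not disclose the actual thresholds behind the automatically calculated grade.

```python
# Illustrative mapping of per-characteristic abnormal percentages to a
# grade of 0 (normal) through 3 (marked). All thresholds and example
# flags are hypothetical.
GRADE_THRESHOLDS = [2.0, 10.0, 25.0]  # hypothetical % cut-offs for grades 1-3

def grade_from_percentage(abnormal_pct: float) -> int:
    grade = 0
    for cutoff in GRADE_THRESHOLDS:
        if abnormal_pct >= cutoff:
            grade += 1
    return grade

def characterize(cell_flags: dict[str, list[bool]]) -> dict[str, tuple[float, int]]:
    """Per characteristic: (% of cells flagged, auto-calculated grade)."""
    out = {}
    for morphology, flags in cell_flags.items():
        pct = 100.0 * sum(flags) / len(flags)
        out[morphology] = (pct, grade_from_percentage(pct))
    return out

# Hypothetical per-cell ANN outputs for two of the 21 characteristics:
flags = {"macrocytosis": [True, False, False, True] + [False] * 16,
         "target_cells": [False] * 19 + [True]}
print(characterize(flags))
```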
The provided text describes the CellaVision DM96 and DM1200 with Advanced RBC Application, an automated cell-locating device. Here's a breakdown of the requested information based on the text:
1. A table of acceptance criteria and the reported device performance
The document states that a clinical evaluation was conducted, and the results "met the predefined acceptance criteria" for various metrics. However, the precise quantitative acceptance criteria and the exact reported performance values are not explicitly stated in this summary. The summary only mentions that the results fulfilled the acceptance criteria.
Metric (Morphology Group) | Acceptance Criteria | Reported Device Performance |
---|---|---|
RBC group Size | Overall Agreement, Positive Percent Agreement (PPA), Negative Percent Agreement (NPA) | "fulfilled the acceptance criteria" |
Groups Color, Shape, Inclusions, and clinically significant morphologies | Efficiency, Sensitivity, Specificity | "fulfilled the acceptance criteria" |
Individual morphological characteristics | Sensitivity, Specificity | "fulfilled the target limits" |
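For reference, the agreement statistics named in the table above are simple functions of a 2x2 tally of paired present/absent calls (device pre-grading vs. the reference grading). A minimal sketch, assuming binary findings per slide:

```python
def agreement_stats(reference: list[bool], device: list[bool]) -> dict[str, float]:
    """Overall agreement, PPA, and NPA for paired present/absent findings."""
    tp = sum(r and d for r, d in zip(reference, device))          # both positive
    tn = sum(not r and not d for r, d in zip(reference, device))  # both negative
    fp = sum(not r and d for r, d in zip(reference, device))     # device-only positive
    fn = sum(r and not d for r, d in zip(reference, device))     # reference-only positive
    return {
        "overall_agreement": (tp + tn) / len(reference),
        "ppa": tp / (tp + fn),  # positive percent agreement
        "npa": tn / (tn + fp),  # negative percent agreement
    }

# Four slides: agreement on three, one device-only positive.
print(agreement_stats([True, True, False, False], [True, True, False, True]))
# {'overall_agreement': 0.75, 'ppa': 1.0, 'npa': 0.5}
```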
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample size: The document states, "Samples were collected and tested for RBC characterization on DM96 and DM1200 at different laboratories." However, the exact sample size for the clinical evaluation (test set) is not specified.
- Data provenance: The samples were collected "from routine workflow from hospital laboratories" and "in accordance with the target patient population, i.e. from samples flagged as abnormal by an automated cell counter." This suggests prospective collection from clinical settings, but specific countries of origin are not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
For the clinical evaluation, the "manual microscopy (Reference Method)" was used as the primary comparator for most morphology groups. For the "RBC group Size," an "automated cell counter" was used as a "convenient predicate device."
- The document implies that the ground truth for most RBC characteristics was established by manual microscopy, which would involve human experts. However, the number of experts and their specific qualifications are not provided. The device's intended use states, "The CellaVision DM96/DM1200 with the Advanced RBC Application is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells," which implies that the reference method would also involve such skilled operators.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not specify an adjudication method for establishing the ground truth from manual microscopy or for resolving discrepancies between readers (if multiple readers were used). It simply refers to "manual microscopy (Reference Method)" as the comparator.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- No, an MRMC comparative effectiveness study was not explicitly described in the provided text. The study conducted was a comparison between the automated device (CellaVision Advanced RBC Application) and manual microscopy (Reference Method), not a study evaluating human reader improvement with AI assistance.
- The device "automatically locates and presents images of blood cells on peripheral blood smears. The operator identifies and verifies the suggested classification of each cell according to type." While the device assists the human, the study's objective was about the equivalence of the device's characterization results to manual microscopy, not the improvement of human readers.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- No, a standalone (algorithm only) performance study was not described. The device is explicitly designed for a human-in-the-loop workflow: "The operator identifies and verifies the suggested classification of each cell according to type." Therefore, the evaluation would inherently include this human interaction. The clinical evaluation compared the "Advanced RBC Application installed on CellaVision DM96 and CellaVision DM1200 (Test Methods)" to the manual method, implying system performance with human verification.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The ground truth used was primarily:
- Expert consensus from manual microscopy for most RBC morphological characteristics.
- Automated cell counter results for the "RBC group Size" (Macrocytes, Microcytes, and Anisocytosis), as manual microscopy was deemed "highly difficult, time consuming and thereby impractical" for this group.
8. The sample size for the training set
The document does not specify the sample size used for the training set of the deterministic artificial neural networks (ANNs). It only mentions that the ANNs were "trained to distinguish between morphology characteristics of red blood cells."
9. How the ground truth for the training set was established
The document does not explicitly state how the ground truth for the training set was established. It mentions that the ANNs were "trained to distinguish between morphology characteristics of red blood cells," implying that labeled data was used for training, but the method of obtaining these labels (e.g., expert annotations, specific pathological confirmation) is not detailed.
(357 days)
JOY
DM1200 is an automated system intended for in-vitro diagnostic use.
The body fluid application is intended for differential count of white blood cells. The system automatically locates and presents images of cells on cytocentrifuged body fluid preparations. The operator identifies and verifies the suggested classification of each cell according to type.
DM1200 is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells.
CellaVision DM1200 with the body fluid application automatically locates and presents images of nucleated cells on cytocentrifuged body fluid preparations. The system suggests a classification for each cell and the operator verifies the classification and has the opportunity to change the suggested classification of any cell.
The system preclassifies cells into the following WBC classes: Unidentified, Neutrophils, Eosinophils, Lymphocytes, Macrophages (including Monocytes), and Other. Cells preclassified as Basophils, Lymphoma cells, Atypical lymphocytes, Blasts, and Tumor cells are automatically forwarded to the cell class Other.
Unidentified is a class for cells and objects which the system has pre-classified with a low confidence level.
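A minimal sketch of this routing logic follows. The per-class probabilities and the confidence threshold are assumptions for illustration; the summary does not disclose the classifier's internals:

```python
# Hypothetical sketch of the pre-classification routing described above.
FORWARD_TO_OTHER = {"Basophil", "Lymphoma cell", "Atypical lymphocyte",
                    "Blast", "Tumor cell"}
CONFIDENCE_THRESHOLD = 0.80  # hypothetical

def preclassify(class_probs: dict[str, float]) -> str:
    """Map per-class probabilities to the displayed pre-classification."""
    best = max(class_probs, key=class_probs.get)
    if class_probs[best] < CONFIDENCE_THRESHOLD:
        return "Unidentified"   # low-confidence cells and objects
    if best in FORWARD_TO_OTHER:
        return "Other"          # classes automatically forwarded to Other
    return best

print(preclassify({"Neutrophil": 0.95, "Blast": 0.05}))  # Neutrophil
print(preclassify({"Blast": 0.90, "Neutrophil": 0.10}))  # Other
print(preclassify({"Neutrophil": 0.55, "Blast": 0.45}))  # Unidentified
```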
Here's an analysis of the provided text, outlining the acceptance criteria and study details for the CellaVision DM1200 with the body fluid application:
Acceptance Criteria and Device Performance Study for CellaVision DM1200 with Body Fluid Application
The CellaVision DM1200 with body fluid application is an automated cell-locating device intended for in-vitro diagnostic use, specifically for the differential count of white blood cells in cytocentrifuged body fluid preparations. The system automatically locates and presents cell images, suggests a classification, and requires operator verification.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document does not explicitly state pre-defined "acceptance criteria" as pass/fail thresholds for accuracy or precision. Instead, it presents the results of a method comparison study between the CellaVision DM1200 (Test Method) and its predicate device, CellaVision DM96 (Reference Method), for various leukocyte classifications. The implied acceptance criteria are that the DM1200 demonstrates comparable accuracy and precision to the predicate device.
Cell Class | Acceptance Criteria (Implied: Comparable to Predicate) | Reported Device Performance (CellaVision DM1200 vs. DM96) |
---|---|---|
Accuracy (Regression Analysis: DM1200 = Slope * DM96 + Intercept) | ||
Neutrophils | Regression where slope is close to 1 and intercept close to 0, with high R² | y = 0.9969x + 0.0050, R² = 0.9932 (95% CI Slope: 0.9868-1.0070, 95% CI Intercept: 0.0004-0.0096) |
Lymphocytes | Regression where slope is close to 1 and intercept close to 0, with high R² | y = 0.9815x + 0.0016, R² = 0.9829 (95% CI Slope: 0.9656-0.9973, 95% CI Intercept: -0.0049-0.0081) |
Eosinophils | Regression where slope is close to 1 and intercept close to 0, with high R² | y = 1.1048x - 0.0002, R² = 0.9629 (95% CI Slope: 1.0782-1.1314, 95% CI Intercept: -0.0007-0.0003) |
Macrophages | Regression where slope is close to 1 and intercept close to 0, with high R² | y = 1.0067x - 0.0050, R² = 0.9823 (95% CI Slope: 0.9901-1.0232, 95% CI Intercept: -0.0125-0.0024) |
Other cells | Regression where slope is close to 1 and intercept close to 0, with high R² | y = 0.9534x + 0.0032, R² = 0.9273 (95% CI Slope: 0.9207-0.9861, 95% CI Intercept: -0.0002-0.0065) |
Precision/Reproducibility (Short-term Imprecision) | ||
Neutrophils | SD % comparable between test and reference method | Test Method: Mean % 32.0, SD % 3.2; Reference Method: Mean % 31.6, SD % 3.4 |
Lymphocytes | SD % comparable between test and reference method | Test Method: Mean % 30.1, SD % 5.6; Reference Method: Mean % 30.5, SD % 5.7 |
Eosinophils | SD % comparable between test and reference method | Test Method: Mean % 0.6, SD % 0.7; Reference Method: Mean % 0.5, SD % 0.6 |
Macrophages | SD % comparable between test and reference method | Test Method: Mean % 35.3, SD % 5.8; Reference Method: Mean % 35.5, SD % 6.2 |
Other cells | SD % comparable between test and reference method | Test Method: Mean % 2.1, SD % 1.7; Reference Method: Mean % 1.9, SD % 2.5 |
The conclusion states that the short-term imprecision was found to be equivalent for the test method and the reference method, and the accuracy results (high R-squared values, slopes close to 1, and intercepts close to 0) demonstrate substantial equivalence.
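The slope, intercept, and R² figures above can be reproduced with ordinary least squares on paired cell fractions. A minimal sketch with simulated data (not data from the submission):

```python
import numpy as np

def ols_comparison(reference: np.ndarray, test: np.ndarray) -> tuple[float, float, float]:
    """Fit test = slope * reference + intercept; return (slope, intercept, R^2)."""
    slope, intercept = np.polyfit(reference, test, deg=1)
    predicted = slope * reference + intercept
    ss_res = float(np.sum((test - predicted) ** 2))
    ss_tot = float(np.sum((test - test.mean()) ** 2))
    return float(slope), float(intercept), 1.0 - ss_res / ss_tot

# Simulated neutrophil fractions for 60 samples (illustrative only).
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 0.8, size=60)
tst = 0.99 * ref + 0.005 + rng.normal(0.0, 0.01, size=60)
print(ols_comparison(ref, tst))  # slope ~ 1, intercept ~ 0, R^2 ~ 0.99
```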
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 260 samples.
- CSF: 62 samples
- Serous fluid: 151 samples
- Synovial fluid: 47 samples
- Data Provenance:
- Country of Origin: Not explicitly stated; samples were collected from "two sites." Given that the submitter is in Sweden and the regulatory contact is in the USA, it is unclear whether these sites were in Sweden, the USA, or elsewhere.
- Retrospective or Prospective: Not explicitly stated, but the description "collected from two sites" and then analyzed suggests a prospective collection or at least fresh samples for the study.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Not explicitly stated for the establishment of ground truth for the test set's initial classifications. However, the document mentions:
- "The results were then verified by skilled human operators." This indicates human review post-analysis by both the test and reference methods.
- The "Intended Use" section states: "DM1200 is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells."
- Qualifications of Experts: "Skilled human operators, trained in the use of the device and in recognition of blood cells." No specific professional qualifications (e.g., "radiologist with 10 years of experience") are provided.
4. Adjudication Method for the Test Set
The document states: "The results were then verified by skilled human operators." It does not specify a multi-reader adjudication method like 2+1 or 3+1. It implies a single operator verification for each result generated by both the test and reference methods.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a multi-reader multi-case (MRMC) comparative effectiveness study designed to measure the effect size of how much human readers improve with AI vs. without AI assistance was not explicitly described. This study was a method comparison between two devices (one with the new application, one being the predicate) with human verification. The device's function is to suggest classifications, which implies an assistive role, but the study design was not an MRMC study comparing human performance with and without AI.
6. Standalone (Algorithm Only) Performance Study
Yes, standalone performance was implicitly assessed. The "Test Method" (CellaVision DM1200) "suggests a classification for each cell," meaning the algorithm performs an initial classification without human intervention. The reported accuracy metrics (R², slope, intercept) compare these suggested classifications to those obtained with the reference method (the DM96, which also performs algorithmic preclassification). However, the study concludes with human verification of these results, so standalone performance without the human-in-the-loop step described in the intended use is not the final reported performance. The "Accuracy results" table (Table 3.3) and "Precision/Reproducibility" table (Table 3.4) reflect the device's performance before the final human verification step, which might change classifications.
7. Type of Ground Truth Used
The ground truth for the comparison was established by the predicate device (CellaVision DM96) together with human verification: each sample underwent "a 200-cell differential count... with both the test method and the reference method. The results were then verified by skilled human operators." It is therefore a form of expert-verified reference measurement, not pathology or outcomes data.
8. Sample Size for the Training Set
The document does not explicitly state the sample size for the training set used to develop the CellaVision DM1200's classification algorithms. It mentions "deterministic artificial neural networks (ANN's) trained to distinguish between classes of white blood cells," but no details about the training data are provided within this summary.
9. How the Ground Truth for the Training Set Was Established
The document does not explicitly describe how the ground truth for the training set was established. It states that the ANNs were "trained to distinguish between classes of white blood cells," implying that a labeled dataset was used for training, but the process of creating these labels (e.g., expert consensus, manual review) is not detailed in this 510(k) summary.
(302 days)
JOY
The EasyCell is intended to locate and display images of white cells, red cells, and platelets acquired from fixed and stained peripheral blood smears and assists a qualified technologist in conducting a WBC differential, RBC morphology evaluation, and platelet estimate using those images. For in vitro diagnostic use only. For professional use only.
The EasyCell automatically locates and presents images of blood cells on peripheral smears. The operator reviews the suggested classification of each white cell according to type and may manually change the suggested classification of any cell. The operator can characterize red cell morphology and estimate platelets based on observed images. The EasyCell is intended to be used by skilled operators, trained in the use of the device and in the identification of blood cells.
This document describes the performance characteristics and acceptance criteria for the EasyCell Automated Cell-Locating Device (K092116).
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly derived from the "Correlation between Reference Method and Test Method" and "Sensitivity and Specificity" tables provided in the submission. The device is deemed acceptable if its performance is "good" or "equivalent" to the reference method. Specific quantitative targets for correlation coefficients, sensitivity, and specificity should have been defined as acceptance criteria but are not explicitly stated as such. Based on the provided data, the device meets these implicit criteria.
Metric | Acceptance Criteria (Implied) | Reported Device Performance (Full Sample) | Reported Device Performance (100 vs. 200 cells) |
---|---|---|---|
WBC Differential (Correlation Coefficient r) | ≥ 0.90 for Neutrophils, Lymphocytes, Eosinophils; ≥ 0.80 for Monocytes | Neutrophil: 0.99; Lymphocyte: 0.98; Monocyte: 0.93; Eosinophil: 0.97 | Neutrophil: 0.96; Lymphocyte: 0.95; Monocyte: 0.83; Eosinophil: 0.93 |
WBC Differential (Slope) | Close to 1.00 | Neutrophil: 1.00; Lymphocyte: 1.00; Monocyte: 0.85; Eosinophil: 0.95 | Neutrophil: 0.99; Lymphocyte: 1.00; Monocyte: 0.83; Eosinophil: 0.99 |
WBC Differential (Intercept) | Close to 0.00 | Neutrophil: 0.39; Lymphocyte: 0.88; Monocyte: 0.59; Eosinophil: 0.066 | Neutrophil: -0.15; Lymphocyte: 0.73; Monocyte: 0.72; Eosinophil: -0.01 |
Efficiency (% agreement) | Not explicitly specified, assumed to be high | Overall: 84% | 200 cells: 84%; 100 cells: 83% |
Sensitivity | Not explicitly specified, assumed to be high | Overall: 91% | 200 cells: 91%; 100 cells: 90% |
Specificity | Not explicitly specified, assumed to be high | Overall: 72% | 200 cells: 72%; 100 cells: 70% |
Platelet Estimate Accuracy | "Good agreement" (Cohen's kappa) | "Good agreement" between methods (Cohen's kappa statistic calculated, but value not provided) | N/A |
Red Blood Cell Morphology Accuracy | "Good agreement" (e.g., >90%) | >90% agreement between methods | N/A |
Between Run Precision | Equivalent to Reference Method | "Test Method has equivalent precision to the Reference Method." | N/A |
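For reference, the sensitivity, specificity, and efficiency figures in the table are simple functions of a cell-level abnormal-vs-normal 2x2 tally, and Cohen's kappa is chance-corrected agreement on paired categorical calls (here, platelet-estimate categories). A minimal sketch with illustrative counts chosen only to reproduce the reported overall 91% / 72% / 84% figures:

```python
def sens_spec_eff(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """Cell-level abnormal-vs-normal metrics from a 2x2 tally."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "efficiency": (tp + tn) / (tp + fp + tn + fn),  # overall % agreement
    }

def cohens_kappa(confusion: list[list[int]]) -> float:
    """Chance-corrected agreement from a square confusion matrix."""
    n = sum(map(sum, confusion))
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / n ** 2
    return (observed - expected) / (1 - expected)

# Illustrative counts (not from the submission): sens ~0.91, spec ~0.72, eff 0.84.
print(sens_spec_eff(tp=575, fp=103, tn=265, fn=57))
# Hypothetical platelet-estimate categories: decreased / adequate / increased.
print(cohens_kappa([[40, 5, 0], [6, 30, 4], [0, 3, 12]]))  # ~0.71, "good agreement"
```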
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: A total of 304 specimens were collected and analyzed. This sample included:
- 155 specimens from normal (healthy) subjects.
- 149 specimens from subjects with specific disease conditions.
- Data Provenance: The data was collected and analyzed at three sites. The country of origin is not explicitly stated, but the submission is to the US FDA, implying clinical sites within the US or compliant with US standards. The study design is prospective, as specimens were collected and analyzed for this method comparison.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: The ground truth (Reference Method) for the test set was established by two trained technologists at each of the three sites, for a total of six technologists. The use of two independent readers per site implies their assessments were compared or adjudicated, though the mechanism is not described.
- Qualifications of Experts: The experts are described as "trained technologist" or "skilled operators, trained in the use of the device and in the identification of blood cells." Specific experience levels (e.g., years of experience or board certification) are not provided.
4. Adjudication Method for the Test Set
The document states that slides were "randomly selected, blinded and read by two technologists at each site." It does not explicitly mention an adjudication method (e.g., 2+1 or 3+1). It is implied that the readings from the two technologists at each site formed the "Reference Method" for comparison, but how discrepancies between these two readers were resolved (if at all) is not detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
An MRMC study, comparing human readers with AI assistance versus without AI assistance, was not explicitly detailed in the provided text. The study design focuses on comparing the EasyCell (with human operator review) to a manual reference method. The "Test Method" described involves an operator reviewing and potentially reclassifying the suggested classifications from the EasyCell. Therefore, the reported performance is already that of the human-in-the-loop system, not "without AI assistance." As such, an effect size of how much human readers improve with AI vs. without AI assistance is not presented.
6. Standalone (i.e., algorithm only without human-in-the-loop performance) Study
A standalone performance study of the algorithm without human-in-the-loop review was not conducted or reported. The device description explicitly states: "The operator reviews the suggested classification... and may manually change the suggested classification of any cell." The "Test Method" in the performance study is defined as "EasyCell using Examiner Review." This indicates that all reported performance metrics include the human element of review and potential reclassification, making it a human-in-the-loop system, not a standalone algorithm.
7. Type of Ground Truth Used
The ground truth used for comparison was expert consensus / manual microscopic examination. Specifically, it is referred to as the "Reference Method," which is described as "a manual differential microscopic examination of the peripheral blood slides by a trained technologist."
8. Sample Size for the Training Set
The sample size for the training set is not provided in the given excerpts. The document describes the "ANN's trained to distinguish between classes of white blood cells" but does not give details about the data used for this training.
9. How the Ground Truth for the Training Set was Established
The method for establishing the ground truth for the training set is not explicitly stated in the provided excerpts. It can be inferred that it would involve expert classification of blood cells, similar to the reference method for the test set, but specific details are absent.
(63 days)
JOY
DM1200 is an automated cell-locating device.
DM1200 automatically locates and presents images of blood cells on peripheral blood smears. The operator identifies and verifies the suggested classification of each cell according to type.
DM1200 is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells.
DM1200 is an automated cell-locating device for differential count of white blood cells, characterization of red blood cell morphology and platelet estimation. DM1200 consists of a slide scanning unit (a robot gripper, a microscope and a camera) and a computer system containing the acquisition and classification software "CellaVision® DM software".
Here's an analysis of the acceptance criteria and study details for the CellaVision® DM1200 Automated Hematology Analyzer, based on the provided 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
The provided 510(k) summary states that "The results fulfilled the pre-defined requirements" for accuracy for cell-location, accuracy for verified classification for each cell class, precision for verified classification for each cell class, and clinical sensitivity and specificity. However, the specific quantitative acceptance criteria or the reported performance metrics (e.g., specific accuracy percentages, precision values, sensitivity, and specificity thresholds) are not detailed in the provided document. The summary only generally claims that the results fulfilled these requirements.
To illustrate what such a table would look like if the data were present, here's a hypothetical structure:
Hypothetical Acceptance Criteria and Reported Device Performance
Performance Metric | Acceptance Criteria (Hypothetical) | Reported Device Performance (Hypothetical) |
---|---|---|
Accuracy for cell-location | > 95% | > 98% |
Accuracy for verified classification | > 90% for all major cell classes | > 92% for all major cell classes |
Precision for verified classification | CV below a pre-defined limit for each cell class | CV within the pre-defined limit for each cell class |
Clinical Sensitivity (overall WBC diff) | > 90% | > 93% |
Clinical Specificity (overall WBC diff) | > 85% | > 88% |
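Procedurally, such pre-defined requirements reduce to a table of thresholds checked against study results. A minimal sketch, with all names, thresholds, and results hypothetical like the placeholder table above:

```python
# Hypothetical acceptance-criteria check; nothing here is from the submission.
HYPOTHETICAL_CRITERIA = {
    "cell_location_accuracy": 0.95,
    "verified_classification_accuracy": 0.90,
    "clinical_sensitivity": 0.90,
    "clinical_specificity": 0.85,
}

hypothetical_results = {
    "cell_location_accuracy": 0.98,
    "verified_classification_accuracy": 0.92,
    "clinical_sensitivity": 0.93,
    "clinical_specificity": 0.88,
}

for metric, threshold in HYPOTHETICAL_CRITERIA.items():
    status = "PASS" if hypothetical_results[metric] >= threshold else "FAIL"
    print(f"{metric}: {hypothetical_results[metric]:.2f} (criterion >= {threshold:.2f}) {status}")
```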
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not explicitly stated. The summary mentions "clinical tests" and "a pre-defined protocol for differentiation of the approved standard CLSI, H20-A2, Reference Leukocyte (WBC) Differential Count (Proportional) and Evaluation of Instrumental Methods." While this indicates a formal study, the specific number of blood smears or cells included in the test set is not mentioned.
- Data Provenance: Not explicitly stated. The document does not specify the country of origin of the data.
- Retrospective or Prospective: Not explicitly stated.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Not explicitly stated.
- Qualifications of Experts: The study was performed according to CLSI H20-A2, which typically involves experienced morphologists. However, the specific qualifications (e.g., "radiologist with 10 years of experience") are not provided in the summary. The device's intended use also states it needs to be used by "skilled operators, trained in the use of the device and in recognition of blood cells," implying that the experts establishing ground truth would be highly skilled in blood cell morphology.
4. Adjudication Method for the Test Set
- Adjudication Method: Not explicitly stated. The CLSI H20-A2 standard, which the study adhered to, typically uses an agreement process among multiple experts to establish a reference differential, which could involve consensus or adjudication. However, the exact method (e.g., 2+1, 3+1, none) is not detailed in this summary.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study: The summary discusses "comparison to the predicate device DM96 for differentiation of white blood cells" and "to confirm equivalence with the predicate device." However, it focuses on the device's performance against the predicate's results and adherence to standards, rather than a comparative effectiveness study showing human readers' improvement with AI vs. without AI assistance. The DM1200 is described as an "automated cell-locating device" where "The operator identifies and verifies the suggested classification of each cell." This indicates a human-in-the-loop system, but the study described does not quantify the improvement of human readers using the AI versus without it.
- Effect Size of Human Reader Improvement: Not applicable, as this specific type of MRMC study was not described.
6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance
- Standalone Performance: No, a standalone performance study (algorithm only) was not explicitly described for the DM1200. The device's intended use clearly states: "The operator identifies and verifies the suggested classification of each cell according to type." This indicates that the device is designed to be used with a human reviewer in the loop, validating the AI's suggestions. The "accuracy for the verified classification" metric further supports this, suggesting the final classification is human-verified.
7. Type of Ground Truth Used
- Type of Ground Truth: Expert consensus with adherence to the CLSI H20-A2, Reference Leukocyte (WBC) Differential Count (Proportional) and Evaluation of Instrumental Methods; 2nd Ed. standard. This standard provides guidelines for establishing reference differentials, which typically rely on experienced morphologists. The summary states: "performed according to a pre-defined protocol for differentiation of the approved standard CLSI, H20-A2."
8. Sample Size for the Training Set
- Sample Size for Training Set: Not mentioned. The summary focuses on the clinical tests for demonstrating equivalence, not on the details of the model's development or training data.
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: Not mentioned. Similar to the training set size, the details of how the training data's ground truth was established are not provided in the 510(k) summary.