510(k) Data Aggregation

    K Number: K243144
    Date Cleared: 2025-06-27 (270 days)
    Regulation Number: 21 CFR 864.5260
    Device: X100HT with Full Field Peripheral Blood Smear (PBS) Application

    Intended Use

    The X100/X100HT with Full Field Peripheral Blood Smear Application is intended to locate and display images of white cells, red cells, and platelets acquired from fixed and stained peripheral blood smears and assists a qualified technologist in conducting a WBC differential, RBC morphology evaluation, and platelet estimate using those images. For in-vitro diagnostic use only. For professional use only.

    Device Description

    The X100 / X100HT with Full Field Peripheral Blood Smear (PBS) Application ("Full Field PBS") is a digital cell morphology solution that presents high-resolution digital images of fixed and stained microscopy peripheral blood smears. The Full Field PBS was previously cleared by the Agency on October 2, 2020, through the review of K201301, and on May 3, 2022, through the review of K220013. The system automatically locates and presents images of peripheral blood cells and streamlines the PBS analysis process with a review workflow composed of four steps: (1) full field review, (2) white blood cells (WBC) review (DSS for WBC pre-classification is available), (3) red blood cells (RBC) review, and (4) platelet review (DSS for platelet estimation is available).

    Under the proposed modification, which is the subject of this 510(k) submission, an additional DSS component is added to the RBC and platelet review steps. For RBC analysis, system-suggested RBC morphological pre-gradings are presented to the user as proposed DSS by means of a dotted line around a suggested grading selection box. Notably, the user is still required to review the slide and actively mark the final grading, exactly as performed in the cleared review workflow.

    The same approach is used for the update to the platelet review step: a system-suggested platelet clump indication is presented to the user by means of a dotted line around a selection box. Notably, the user is still required to manually mark whether platelet clumps were detected, exactly as currently performed in the cleared review workflow.

    The changes under discussion do not affect the cleared indications for use or intended use; the user's workflow of scanning and analyzing peripheral blood smears using the Full Field PBS Application remains otherwise unchanged as well.

    AI/ML Overview

    The Scopio Labs X100/X100HT with Full Field Peripheral Blood Smear (PBS) Application received FDA 510(k) clearance (K243144) for a modification that adds system-suggested pre-gradings for RBC morphology and platelet clump indications. The supporting evidence comprised a method comparison study, a repeatability study, a reproducibility study, and software verification and validation.

    1. Acceptance Criteria and Reported Device Performance

    The FDA clearance document does not explicitly state the numerical acceptance criteria for the method comparison study. However, it indicates that "The results met the pre-defined acceptance criteria." The reported performance metrics are presented as overall agreement, Positive Percent Agreement (PPA), and Negative Percent Agreement (NPA).

    | Category | Overall Agreement | PPA | NPA |
    | --- | --- | --- | --- |
    | RBC Color | 97.88% (97.29% to 98.42%) | 98.33% (97.48% to 99.10%) | 97.61% (96.81% to 98.33%) |
    | RBC Inclusions | 97.90% (97.50% to 98.27%) | 86.73% (81.66% to 91.23%) | 98.41% (98.06% to 98.78%) |
    | RBC Shape | 96.22% (95.92% to 96.50%) | 95.35% (94.50% to 96.12%) | 96.40% (96.06% to 96.71%) |
    | RBC Size | 95.58% (95.06% to 96.13%) | 99.42% (99.03% to 99.75%) | 92.72% (91.82% to 93.70%) |
    | PLT Clumping | 87.08% (85.25% to 88.92%) | 86.11% (82.13% to 89.91%) | 87.39% (85.39% to 89.39%) |
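    The agreement metrics above (overall agreement, PPA, NPA with 95% confidence intervals) are standard 2x2 concordance statistics. As a minimal sketch, assuming hypothetical counts (the submission does not publish the underlying 2x2 tables) and Wilson score intervals (the document does not state which interval method was actually used):

```python
import math

def wilson_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% confidence interval for a binomial proportion."""
    if total == 0:
        return (0.0, 0.0)
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return (centre - half, centre + half)

def agreement_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Overall agreement, PPA, and NPA from a 2x2 concordance table."""
    n = tp + fp + fn + tn
    return {
        "overall": (tp + tn) / n,
        "overall_ci": wilson_ci(tp + tn, n),
        "ppa": tp / (tp + fn),   # sensitivity relative to the reference method
        "ppa_ci": wilson_ci(tp, tp + fn),
        "npa": tn / (tn + fp),   # specificity relative to the reference method
        "npa_ci": wilson_ci(tn, tn + fp),
    }

# Illustrative counts only -- not the study's actual data.
m = agreement_metrics(tp=590, fp=14, fn=10, tn=586)
print(f"OA {m['overall']:.2%}, PPA {m['ppa']:.2%}, NPA {m['npa']:.2%}")
```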

    The precision studies (repeatability and reproducibility) also "met the pre-defined acceptance criteria." However, the specific numerical criteria for these studies are not detailed in the provided document.

    2. Sample Size and Data Provenance for the Test Set

    • Sample Size: A total of 1200 anonymized PBS slides were used for the method comparison study.
    • Data Provenance: The slides were collected from the laboratory routine workload of three medical centers. The country of origin is not specified, but the submitter information lists "Tel Aviv, Israel" as the sponsor address, which may suggest the data originated from Israel, though this is not explicitly stated for the clinical evaluation. The data is retrospective, as it was collected from "routine workload."

    3. Number of Experts and Qualifications for Ground Truth Establishment (Test Set)

    The document does not explicitly state the number of experts used to establish the ground truth for the test set or their specific qualifications (e.g., years of experience). It implies that the ground truth was established by referring to "pre-defined acceptance criteria" and a "method comparison study," which typically involves comparison against human expert interpretation. However, the details of this expert interpretation are not provided.

    4. Adjudication Method for the Test Set

    The adjudication method used to establish the ground truth for the test set is not specified in the provided document.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. The study described is a method comparison study between the modified device (with added DSS) and either the previous cleared device or human expert consensus, but not a comparative effectiveness study measuring improvement in human reader performance.

    6. Standalone Performance Study (Algorithm Only)

    The document describes the device as providing "system-suggested RBC morphological pre-gradings" and "system-suggested platelet clump indication" by means of a dotted line, and that "the user is still required to review the slide and actively mark the final grading." This indicates that the device functions as an assistive tool (human-in-the-loop) rather than a completely standalone algorithm making final diagnoses. The performance metrics presented (Overall Agreement, PPA, NPA) are likely comparisons of the system's suggestions against ground truth, which implicitly reflects a standalone component, but the final reported performance is within the context of assisting a qualified technologist, not solely the algorithm's output. Therefore, a purely standalone (algorithm-only without human-in-the-loop) performance is not explicitly presented or claimed for clinical use.

    7. Type of Ground Truth Used

    Based on the description of the "method comparison study" and the nature of the device (assisting a qualified technologist), the ground truth was likely established by expert consensus or through a reference method performed by qualified experts in hematology/morphology. The context of "laboratory routine workload" and comparison implies an existing gold standard interpretation.

    8. Sample Size for the Training Set

    The document does not provide information on the sample size used for the training set of the AI/DSS components. It only details the test set used for performance evaluation of the modified device.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how the ground truth for the training set was established. It states that "Cell images are analysed using standard mathematical methods, including deterministic artificial neural networks (ANN's) trained to distinguish between classes of white blood cells." This implies a training process, but the details of ground truth establishment for that training are omitted.


    K Number: K221309
    Date Cleared: 2023-09-19 (502 days)
    Regulation Number: 21 CFR 864.5260
    Device: AI100 with Shonit™ (automated cell-locating device)

    Intended Use

    AI100 with Shonit™ is a cell locating device intended for in-vitro diagnostic use in clinical laboratories.

    AI100 with Shonit™ is intended for differential count of White Blood Cells (WBC), characterization of Red Blood Cells (RBC) morphology and Platelet morphology. It automatically locates blood cells on peripheral blood smears and presents images of the blood cells for review.

    A skilled operator, trained in the use of the device and in the review of blood cells, identifies each cell according to type.

    Device Description

    The AI100 with Shonit™ device consists of a high-resolution microscope with LED illumination, and compute parts such as the motherboard, CPU, RAM, Wi-Fi dongle, SSD containing AI100 with Shonit™ software, motorized XYZ stage, a camera with firmware, PCB and its firmware for driving motor and LED, SMPS, power supply and a casing. It is capable of handling one Peripheral Blood Smear (PBS) slide at a time.

    Software plays an intrinsic role in the AI100 with Shonit™ device, and the combination of hardware and software works together for the device to achieve its intended use. The main functions of the software can be summarized as follows:

    • Allow the user to set up the device and perform imaging of a PBS slide.
    • Control the hardware components (camera, LEDs, stages, etc.) to take images of a PBS slide.
    • Store and manage images and other data corresponding to the PBS slide and present them to the user.
    • Analyze images, allow the user to identify components in the images, and create a report for review.
    • Allow the user to finalize, download, and print a report.
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the AI100 with Shonit™ device, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document doesn't explicitly list a table of "acceptance criteria" alongside "reported device performance." Instead, it presents the results of various studies and states that "All tests met acceptance criteria." We can infer the acceptance criteria from the context of these results.

    Here's an inferred table based on the "Method Comparison Study" section, which compares the device to manual microscopy:

    | Metric | Acceptance Criteria (Implied) | Reported Device Performance (95% CI) |
    | --- | --- | --- |
    | WBC Differential (Passing-Bablok Regression) | | |
    | Neutrophils (%) | Slope CI should include 1; intercept CI should include 0 | Slope: 1.024 (1.016, 1.032); Intercept: -1.78 (-2.249, -1.346) |
    | Lymphocytes (%) | Slope CI should include 1; intercept CI should include 0 | Slope: 1.025 (1.016, 1.034); Intercept: -0.587 (-0.881, -0.306) |
    | Eosinophils (%) | Slope CI should include 1; intercept CI should include 0 | Slope: 1.029 (1.012, 1.05); Intercept: -0.039 (-0.07, -0.01) |
    | Monocytes (%) | Slope CI should include 1; intercept CI should include 0 | Slope: 1.083 (1.051, 1.117); Intercept: -0.462 (-0.66, -0.304) |
    | WBC Abnormalities (Sensitivity, Specificity, Overall Agreement) | "Met the acceptance criteria" | |
    | Morphological Abnormality | N/A | Overall Agreement: 91.7% (90.4%, 92.8%); Sensitivity: 95.3% (92.8%, 96.7%); Specificity: 90.9% (89.4%, 92.2%) |
    | Distributional Abnormality | N/A | Overall Agreement: 96.4% (95.5%, 97.2%); Sensitivity: 91.0% (86.8%, 93.9%); Specificity: 97.2% (96.3%, 97.9%) |
    | Overall WBC Abnormality | N/A | Overall Agreement: 95.0% (94.0%, 95.9%); Sensitivity: 92.7% (89.2%, 95.0%); Specificity: 95.4% (94.3%, 96.3%) |
    | RBC Morphologies (Sensitivity, Specificity, Overall Agreement) | "Met the acceptance criteria" | |
    | Anisocytosis | N/A | Sensitivity: 91.1% (88.1%, 93.4%); Specificity: 95.9% (94.7%, 96.9%); Overall Agreement: 94.7% (93.6%, 95.7%) |
    | Macrocytosis | N/A | Sensitivity: 90.7% (87.0%, 93.5%); Specificity: 96.6% (95.5%, 97.4%); Overall Agreement: 95.5% (94.5%, 96.4%) |
    | Poikilocytosis | N/A | Sensitivity: 96.3% (94.8%, 97.3%); Specificity: 88.1% (85.8%, 90.0%); Overall Agreement: 92.1% (90.7%, 93.2%) |
    | Platelet Morphologies (Sensitivity, Specificity, Overall Agreement) | "Met the acceptance criteria" | |
    | Platelets (Overall) | N/A | Sensitivity: 100% (99.8%, 100%); Specificity: 100% (34.2%, 100%); Overall Agreement: 100% (99.8%, 100%) |
    | Giant Platelets | N/A | Sensitivity: 99.1% (98.4%, 99.5%); Specificity: 92.4% (90.3%, 94.1%); Overall Agreement: 96.4% (95.4%, 97.1%) |
    | Platelet Clumps | N/A | Sensitivity: 91.6% (89.5%, 93.4%); Specificity: 96.3% (94.9%, 97.3%); Overall Agreement: 94.2% (93.0%, 95.2%) |
    | Overall Platelets | N/A | Sensitivity: 97.9% (97.1%, 98.4%); Specificity: 94.6% (92.8%, 95.9%); Overall Agreement: 96.8% (96.0%, 97.4%) |

    Note that several of the reported slope and intercept CIs (e.g., Neutrophils) exclude 1 and 0, respectively, even though the document states all tests met acceptance criteria; the sponsor's actual pre-defined criteria were therefore likely tolerance ranges (compare the slope range of 0.8-1.2 used in K200595 below) rather than the CI-inclusion rule inferred here.
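    Passing-Bablok regression, used above for the WBC differential comparison, is a rank-based method whose slope is a shifted median of all pairwise slopes. A simplified sketch of the point estimates and of the CI-inclusion acceptance rule quoted above (the rank-based confidence intervals are omitted; the function names and toy data are illustrative, not SigTuple's):

```python
import statistics

def passing_bablok(x: list[float], y: list[float]) -> tuple[float, float]:
    """Passing-Bablok point estimates (slope, intercept).

    Sketch of the 1983 procedure: pairwise slopes of exactly -1 (and
    undefined slopes) are discarded, and the median index is offset by
    the count of slopes below -1 to keep the estimator unbiased.
    """
    slopes = []
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            if x[i] != x[j]:
                s = (y[j] - y[i]) / (x[j] - x[i])
                if s != -1:
                    slopes.append(s)
    slopes.sort()
    k = sum(1 for s in slopes if s < -1)   # median shift
    m = len(slopes)
    if m % 2 == 1:
        slope = slopes[(m - 1) // 2 + k]
    else:
        slope = 0.5 * (slopes[m // 2 - 1 + k] + slopes[m // 2 + k])
    intercept = statistics.median(yi - slope * xi for xi, yi in zip(x, y))
    return slope, intercept

def ci_supports_equivalence(slope_ci: tuple, intercept_ci: tuple) -> bool:
    """The acceptance rule inferred above: slope CI contains 1, intercept CI contains 0."""
    return slope_ci[0] <= 1.0 <= slope_ci[1] and intercept_ci[0] <= 0.0 <= intercept_ci[1]

# Toy data: device result (y) vs. manual differential (x), in percent.
x = [55.0, 60.2, 48.7, 70.1, 62.3, 51.0, 66.4, 58.8]
y = [54.6, 61.0, 48.0, 71.2, 62.9, 50.1, 67.0, 59.5]
b, a = passing_bablok(x, y)
print(f"slope={b:.3f}, intercept={a:.3f}")
```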

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Method Comparison Study: A total of 882 samples were collected and analyzed.
      • 298 normal samples
      • 584 abnormal samples
    • Data Provenance:
      • Country of Origin: Not explicitly stated, but the submitter is SigTuple Technologies Pvt. Ltd. from Bangalore, Karnataka, India. The regulatory consulting firm is US-based. The clinical study was conducted across four sites, implying a multi-site study.
      • Retrospective or Prospective: Not explicitly stated, but the phrasing "samples were collected and analyzed" suggests a prospective collection for the purpose of the study.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: "two medical reviewers at each site" for the method comparison study. Since there were four sites, this implies a total of 8 medical reviewers.
    • Qualifications of Experts: They were described as "trained qualified reviewers" and "skilled operator, trained in the use of the device and in the review of blood cells." The document specifies that the ground truth review was done by "performing manual microscopy," indicating expertise in manual blood smear analysis.

    4. Adjudication Method for the Test Set

    • The document states that the "stained slides were read by two medical reviewers at each site both on the AI100 with Shonit™ device and manual microscope (reference method)."
    • It does not explicitly describe an adjudication method (e.g., 2+1, 3+1) if the two reviewers disagreed on the ground truth. It seems the comparison was direct, with both reviewers' manual microscopy results forming part of the "reference method."

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    • A type of MRMC study was performed in the "Method Comparison Study" section, where "two medical reviewers at each site" read the slides. However, this study was primarily designed to compare the device's output (with human verification) to the manual microscopy method (human reference).
    • The study design directly compares the device-assisted reading (where the human operator reviews and verifies the AI's suggestions) against the manual microscopy reference method. It is not designed to quantify the effect size of how much human readers improve with AI vs. without AI assistance for the same human reader. The human readers are presented with the AI's suggestions in one arm and perform traditional manual microscopy in the other.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • The document mentions "pre-classified results suggested by the device" and "pre-characterized results suggested by the device" in the repeatability and reproducibility studies (Sections 5.11 and 5.12). This indicates that the algorithm's raw classifications were evaluated in these analytical performance studies.
    • The "principle of operation" also states: "On each FOV image, image processing is applied to extract and classify WBCs, RBCs, and Platelets."
    • However, the clinical performance (method comparison study in Section 5.13) which leads to the substantial equivalence determination, is based on a human-in-the-loop workflow: "A skilled operator, trained in the use of the device and in the review of blood cells, identifies and classifies each cell according to type." and "The device then allows the user to review the identified and classified cells... The user may re-classify cells and add impressions as they deem fit and approve the report."
    • Therefore, while the algorithm's internal performance was evaluated analytically, the regulatory submission for substantial equivalence focuses on the device's performance as a human-in-the-loop system, not a standalone AI.

    7. The Type of Ground Truth Used

    • The ground truth used for the clinical performance study (method comparison) was expert consensus / manual microscopy by skilled operators. Specifically, slides were "read by two medical reviewers at each site both on the AI100 with Shonit™ device and manual microscope (reference method)." The "reference method" from manual microscopy served as the ground truth.

    8. The Sample Size for the Training Set

    • The document does not explicitly state the sample size used for the training set of the AI. The discussions focus on analytical and clinical validation studies for regulatory clearance.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not explicitly describe how the ground truth for the training set was established. It mentions the use of "neural network of convolutional type" which implies deep learning, but the specifics of its training data and ground truth labeling are not detailed in this regulatory summary.

    K Number: K220013
    Date Cleared: 2022-05-03 (119 days)
    Regulation Number: 21 CFR 864.5260
    Device: X100HT with Slide Loader with Full Field Peripheral Blood Smear (PBS) Application

    Intended Use

    The X100HT with Full Field Peripheral Blood Smear (PBS) Application is intended to locate and display images of white cells, red cells, and platelets acquired from fixed and stained peripheral blood smears and assists a qualified technologist in conducting a WBC differential, RBC morphology evaluation, and platelet estimate using those images. For in vitro diagnostic use only. For professional use only.

    Device Description

    X100HT with Full Field Peripheral Blood Smear (PBS) Application automatically locates and presents high-resolution digital images from fixed and stained peripheral blood smears. The user browses through the imaged smear to gain high-level general impressions of the sample. In conducting the white blood cell (WBC) differential, the user reviews the X100HT with Full Field PBS suggested classification of each automatically detected WBC and may manually change the suggested classification of any cell. In conducting the red blood cell (RBC) morphology evaluation, the user can characterize RBC morphology on observed images. In conducting the platelet estimation, the user reviews each automatically detected platelet and the suggested platelet estimation and may manually change the detections or the estimation. The X100HT with Full Field PBS enables efficient slide loading by providing three cassettes, each of which can be loaded with up to ten peripheral blood smear slides. The slide loader automatically adds mounting media and coverslips to the slides and loads them into the X100 for scanning and analysis. The X100HT with Full Field PBS is intended to be used by skilled users, trained in the use of the device and in the identification of blood cells.

    AI/ML Overview

    The provided text describes the regulatory clearance of the Scopio X100HT with Full Field Peripheral Blood Smear (PBS) Application, comparing it to a predicate device (X100 with Full Field PBS Application). While it outlines the device's intended use and the general types of testing performed (software, hardware, EMC, safety), it does not contain explicit details on the acceptance criteria or the specific study results that prove the device meets these criteria for the AI/automation components of the system.

    The document primarily focuses on demonstrating substantial equivalence to a predicate device, particularly highlighting the addition of a 'Slide Loader' and minor software modifications for workflow efficiency, rather than a detailed performance study of the AI's diagnostic capabilities. The core image analysis and AI components ("standard mathematical methods, including deterministic artificial neural networks (ANN's) trained to distinguish between classes of white blood cells") are stated to be "identical" to the predicate device. Therefore, a comprehensive performance study as requested, particularly regarding the AI's diagnostic accuracy against a ground truth and comparative effectiveness with human readers, is not present in this document.

    However, based on the information provided, here's what can be extracted and inferred, with acknowledgments of missing details:


    Acceptance Criteria and Device Performance (Inferred/General)

    Since the core AI/analysis technique is stated to be "identical" to the predicate device, it's implied that the performance of the X100HT (regarding cell classification accuracy, etc.) would be similar to what was demonstrated for the predicate device's clearance. The document focuses on the new functionality (slide loader) and how it does not raise new questions of safety or effectiveness, meaning the existing performance of the analytical portion is presumed acceptable.

    Table 1: Acceptance Criteria and Reported Device Performance

    | Performance Metric Category | Acceptance Criteria (Inferred from Predicate's Clearance; Not Explicitly Stated for X100HT) | Reported Device Performance (Inferred, as Core AI Is Identical to Predicate) |
    | --- | --- | --- |
    | WBC Differential Accuracy | Not explicitly stated for X100HT; performance equivalent to predicate expected | Achieves pre-classified WBC categorization using ANNs, to be reviewed by user |
    | RBC Morphology Evaluation Presentation | Not explicitly stated for X100HT | Presents an overview image for examiner characterization |
    | Platelet Estimation Accuracy | Not explicitly stated for X100HT; performance equivalent to predicate expected | Automatically locates/counts platelets, provides estimate for user review |
    | Functional Equivalence to Predicate | The device's results (images and suggested classifications) are substantively equivalent to the predicate | Stated to be "identical" analysis technology to K201301 predicate |
    | Software Functionality (Slide Loader) | Integration of slide loader enhances workflow without compromising core analysis or safety | Replaces manual steps of mounting media/coverslipping and slide loading |
    | Safety and EMC | Compliance with IEC/EN standards for safety and EMC | Successfully passed IEC 60601-1-2, FCC Part 15 Subpart B, IEC 61010-2-101, IEC 61010-1, IEC 62471 |

    Details on the Study Proving Device Meets Acceptance Criteria:

    1. Sample Size Used for the Test Set and Data Provenance:

      • Not specified in the provided text. The document states "Verification and validation testing was conducted and documentation was updated," but does not list sample sizes for these tests, nor the origin (country) or nature (retrospective/prospective) of the data. Given the device's classification and the focus on "substantial equivalence," it's possible detailed clinical performance data was not a primary requirement for this 510(k), as the core AI was already cleared.
    2. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

      • Not specified in the provided text. The document mentions the device "assists a qualified technologist" and is for "skilled users, trained in the use of the device and in the identification of blood cells," but does not detail the experts used for ground truth generation in any validation studies.
    3. Adjudication Method for the Test Set:

      • Not specified in the provided text.
    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • Not specified in the provided text. The document emphasizes that the user "reviews the suggested classification" and "may manually change the suggested classification," indicating a human-in-the-loop workflow. However, an MRMC study comparing human performance with and without AI assistance is not described.
    5. Standalone (Algorithm Only) Performance:

      • A standalone performance study of the algorithm's accuracy in classifying cells (without human review/override) is not explicitly detailed in the provided text for the X100HT. The description of the device's function clearly outlines a "pre-classified" stage where the ANN suggests classifications, which are then reviewed and potentially modified by a human user. The performance reported is thus implicitly a human-in-the-loop performance, but the standalone accuracy is not quantified.
    6. Type of Ground Truth Used:

      • Not specified in the provided text. Since the device "pre-classifies" cells, the ground truth for training and validating the ANN would likely involve expert consensus or manual expert classification of blood cells. However, this is not explicitly stated.
    7. Sample Size for the Training Set:

      • Not specified in the provided text. The document mentions "deterministic artificial neural networks (ANN's) trained to distinguish between classes of white blood cells," but the size of the training dataset is not provided.
    8. How the Ground Truth for the Training Set Was Established:

      • Not specified in the provided text. Similar to point 6, it can be inferred that expert classification was used, but the specific process (e.g., number of experts, consensus methods) is not described.

    Summary of Missing Information:

    The provided 510(k) summary focuses almost entirely on demonstrating that the X100HT, with its new slide loader, is substantially equivalent to an already cleared predicate device (K201301). It highlights that the core analytical software and imaging technology responsible for AI-assisted cell classification are "identical" to the predicate. Therefore, details regarding new performance studies for the AI component itself (acceptance criteria, test set sizes, ground truth establishment, MRMC studies) are not present in this document, as the performance aspect of the AI was likely covered in the predicate device's clearance. This document serves to demonstrate that the modifications (primarily the slide loader) do not negatively impact the previously established safety and effectiveness.


    K Number: K200595
    Date Cleared: 2020-10-16 (224 days)
    Regulation Number: 21 CFR 864.5260
    Device: CellaVision DC-1, CellaVision DC-1 PPA (automated cell-locating device)

    Intended Use

    CellaVision® DC-1 is an automated cell-locating device intended for in-vitro diagnostic use in clinical laboratories.
    CellaVision® DC-1 is intended to be used by operators, trained in the use of the device.

    Intended use of the Peripheral Blood Application
    The Peripheral Blood Application is intended for differential count of white blood cells (WBC), characterization of red blood cell (RBC) morphology and platelet estimation.
    The CellaVision® DC-1 with the Peripheral Blood Application automatically locates blood cells on peripheral blood (PB) smears. The application presents images of the blood cells for review. A skilled operator trained in recognition of blood cells, identifies and verifies the suggested classification of each cell according to type.

    Device Description

    The CellaVision® DC-1 is an automated cell-locating device intended for in-vitro diagnostic use. CellaVision® DC-1 automatically locates and presents images of blood cells found on peripheral blood smears. The operator identifies and verifies the suggested classification of each cell according to type. CellaVision® DC-1 is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells.
    The CellaVision® DC-1 consists of a built-in PC with a Solid-State Disc (SSD) containing CellaVision DM Software (CDMS), a high-power magnification microscope with a LED illumination, an XY stage, a proprietary camera with firmware, built-in motor- and illumination LED controller, a casing and an external power supply. It is capable of handling one slide at a time.

    AI/ML Overview

    Here's a detailed breakdown of the acceptance criteria and study information for the CellaVision® DC-1 device, based on the provided text:

    Acceptance Criteria and Device Performance

    | Parameter | Acceptance Criteria | Reported Device Performance |
    |---|---|---|
    | WBC Repeatability | Proportional cell count in percent for each cell class. All tests except repeatability and within-laboratory precision on three occasions met acceptance criteria. Variation in mean values between slide 1 and 2 for specific cell types should not exceed acceptance criteria. | Overall successful. Three samples displayed variation in mean values between slide 1 and 2 for specific cell types, resulting in a variation slightly above acceptance criteria. |
    | RBC Repeatability | Agreement (percentage of runs reporting the grade) for each morphology. Variation in mean values between slide 1 and 2 for specific cell types should not exceed acceptance criteria. | Overall successful. One sample displayed variation in mean value between slide 1 and 2 for the specific cell type, resulting in a variation above acceptance criteria. |
    | PLT Repeatability | Agreement (percentage of runs reporting the actual PLT level) for each PLT level. | Met the acceptance criterion in all samples at all three sites. |
    | WBC Reproducibility | Not explicitly stated in quantitative terms; implied, based on ANOVA on proportional cell count for each class. | Overall successful. |
    | RBC Reproducibility | Not explicitly stated in quantitative terms; implied, based on agreement (percentage of runs reporting the grade). | Overall successful. |
    | PLT Reproducibility | Agreement (percentage of runs reporting the most prevalent level) for each PLT level. | Met the acceptance criterion in all samples. Overall successful. |
    | WBC Comparison (Accuracy) | Linear regression slope: 0.8-1.2; intercept: ±0.2 for individual cell types with a normal level of >5%. | Accuracy evaluations successful in all studies, showing no systematic difference between DC-1 and DM1200. Agreement for segmented neutrophils, lymphocytes, eosinophils, and monocytes was within acceptance criteria. |
    | WBC Comparison (Distribution & Morphology) | Not explicitly stated; PPA, NPA, and OA percentages are reported for comparison to the predicate. | Distribution: OA 89.8%, PPA 89.2%, NPA 90.4%. Morphology: OA 91.3%, PPA 88.6%, NPA 92.5%. |
    | RBC Comparison | Not explicitly stated; PPA, NPA, and OA percentages are reported for comparison to the predicate with 95% CI. | Color: OA 79.9% (76.4%-82.9%), PPA 87.8% (82.3%-91.8%), NPA 76.3% (71.9%-80.2%). Size: OA 88.2% (85.3%-90.6%), PPA 89.8% (86.3%-92.2%), NPA 84.8% (79.0%-89.2%). Shape: OA 85.2% (82.0%-87.8%), PPA 87.3% (82.3%-91.0%), NPA 83.8% (79.6%-87.3%). |
    | PLT Comparison | Cohen's kappa coefficient ≥ 0.6. | Weighted kappa 0.89405 (95% CI: 0.87062 to 0.91748), meeting the acceptance criterion. |
    | EMC Testing | Conformity with IEC 61010-1:2010 and IEC 61010-2-101:2015. | Testing showed that the DC-1 conforms to IEC 61010-1:2010 and IEC 61010-2-101:2015. |
    | Software V&V | Software verification and validation testing documentation provided as recommended by FDA. | Documentation was provided as recommended by FDA's Guidance for Industry and Staff; software classified as a "moderate" level of concern. |
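    The agreement statistics in the table (OA, PPA, NPA, and weighted kappa) are standard concordance measures. The sketch below is illustrative only: the counts are made up, not taken from the submission, and linear weighting is one common choice of kappa weights (the submission does not state which weighting scheme was used).

    ```python
    def agreement_metrics(tp, fp, fn, tn):
        """Overall, positive, and negative percent agreement between a test
        method and a reference method, from a 2x2 concordance table."""
        oa = (tp + tn) / (tp + fp + fn + tn)   # Overall Agreement (OA)
        ppa = tp / (tp + fn)                   # Positive Percent Agreement (PPA)
        npa = tn / (tn + fp)                   # Negative Percent Agreement (NPA)
        return oa, ppa, npa

    def weighted_kappa(confusion):
        """Cohen's linearly weighted kappa for ordinal categories.
        confusion[i][j] = cases rated category i by one method, j by the other."""
        k = len(confusion)
        n = sum(sum(row) for row in confusion)
        row_tot = [sum(row) for row in confusion]
        col_tot = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
        observed = expected = 0.0
        for i in range(k):
            for j in range(k):
                w = abs(i - j)  # linear disagreement weight
                observed += w * confusion[i][j]
                expected += w * row_tot[i] * col_tot[j] / n
        return 1.0 - observed / expected

    # Hypothetical counts for illustration only.
    oa, ppa, npa = agreement_metrics(tp=118, fp=10, fn=9, tn=302)
    kappa = weighted_kappa([[40, 5, 0], [4, 120, 6], [0, 3, 22]])  # 3-level grading
    ```

    Perfect agreement (a purely diagonal table) yields a kappa of 1.0, and agreement no better than chance yields 0, which is why the ≥ 0.6 criterion is a meaningful bar.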

    Study Details

    1. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):

      • WBC Comparison: 598 samples (1,196 slides; one A and one B slide per sample).
      • RBC Comparison: 586 samples.
      • PLT Comparison: 598 samples.
      • Repeatability and Reproducibility studies:
        • WBC & RBC Repeatability: Five samples at each of three sites.
        • PLT Repeatability: Four samples at each of three sites.
        • WBC Reproducibility: Five samples, five slides prepared from each, processed daily at three sites for five days.
        • RBC & PLT Reproducibility: Four samples, five slides prepared from each, processed daily at three sites for five days.
      • Data Provenance: The studies were conducted in a clinical setting using three different laboratories. The specific country of origin is not mentioned, but the context implies clinical labs. The studies appear to be prospective as samples were processed for the studies.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):

      • The document states that a "skilled operator trained in recognition of blood cells" identifies and verifies the suggested classification. For the comparison studies, the predicate device (DM1200) results served as the reference for comparison. While the specific number or qualifications of experts adjudicating the DM1200's results (which serve as the reference standard) are not explicitly stated, the context implies that a medically trained professional would be involved in generating and validating those results. The study design for WBC and RBC comparisons references CLSI H20-A2, which typically outlines protocols for comparison of manual differential leukocyte counts, implying that the "ground truth" (or reference standard) would have been established by trained medical technologists or pathologists according to accepted procedures.
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • The document describes a comparison study between the CellaVision® DC-1 (test device) and the CellaVision® DM1200 (reference device, K092868). The results from a "skilled operator" using the predicate device were used as the reference data. Therefore, the adjudication method for the test set ground truth (if DM1200's results are considered the ground truth equivalent) is essentially the standard workflow of the predicate device with human operator verification. There is no explicit mention of an adjudication panel for discrepancies between human observers or between the device and a human. The DC-1's proposed classifications are compared against the verified classifications from the DM1200.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • No MRMC comparative effectiveness study evaluating human readers with AI vs without AI assistance was presented. The study focuses on comparing the new device (DC-1, which uses CNN for pre-classification) against a predicate device (DM1200, which uses ANN for pre-classification), both of which employ AI in a human-in-the-loop workflow. Both systems are "automated cell-locating devices" where a "skilled operator" reviews and verifies the pre-classified cells. The study does not quantify the improvement of a human reader with either AI system compared to a human reader performing a completely manual differential count without any AI assistance.
    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

      • No, a standalone (algorithm only) performance study was not done or reported as part of the submission. The device is explicitly intended for "in-vitro diagnostic use in clinical laboratories" where a "skilled operator...identifies and verifies the suggested classification of each cell according to type." The device "pre-classifies" WBCs and "pre-characterizes" RBCs, but the operator's verification is integral to the intended use. The performance data for repeatability and comparability were evaluated on the "preclassified results suggested by the device" (Repeatability) or "verified WBC/RBC results from the DC-1" (Comparison), indicating the involvement of a human in the final assessment for the comparison studies.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

      • The ground truth for the comparison studies was established by the verified results obtained from the predicate device (CellaVision® DM1200) operated by a skilled professional. This is implied to be equivalent to an expert consensus based on established laboratory practices, as the DM1200 is also a device requiring human verification. For repeatability and reproducibility studies, "preclassified results suggested by the device" or "verified data" were used, again implying that the 'ground truth' for evaluating the DC-1's consistency relies on its own outputs, or validated outputs, rather than an independent expert consensus on raw slides.
    7. The sample size for the training set:

      • The document does not specify the sample size used for training the Convolutional Neural Networks (CNN) for the CellaVision® DC-1. It only mentions that CNNs are used for preclassification and precharacterization.
    8. How the ground truth for the training set was established:

      • The document does not specify how the ground truth for the training set was established. It only states that the CNNs are "trained to distinguish between classes of white blood cells."

    K Number
    K201301
    Manufacturer
    Date Cleared
    2020-10-02

    (140 days)

    Product Code
    Regulation Number
    864.5260
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Device Name: X100 with Full Field Peripheral Blood Smear (PBS) Application
    Regulation Number: 21 CFR 864.5260

    Intended Use

    The X100 with Full Field Peripheral Blood Smear Application is intended to locate and display images of white cells, red cells, and platelets acquired from fixed and stained peripheral blood smears and assists a qualified technologist in conducting a WBC differential, RBC morphology evaluation, and platelet estimate using those images. For in vitro diagnostic use only. For professional use only.

    Device Description

    X100 with Full Field Peripheral Blood Smear Application (Scopio's Full Field PBS) automatically locates and presents high resolution digital images from fixed and stained peripheral blood smears. The user browses through the imaged smear to gain high-level general impressions of the sample. In conducting white blood cells (WBC) differential, the user reviews the suggested classification of each automatically detected WBC, and may manually change the suggested classification of any cell. In conducting red blood cells (RBC) morphology evaluation, the user can characterize RBC morphology on observed images. In conducting platelets estimation, the user reviews each automatically detected platelet and the suggested platelet estimation, and may manually change the detections or the estimation. The X100 with Full Field Peripheral Blood Smear Application is intended to be used by skilled users, trained in the use of the device and in the identification of blood cells.

    AI/ML Overview

    The provided text describes the performance data for the X100 with Full Field Peripheral Blood Smear Application, comparing its results to those achieved by using a manual light microscope, which serves as the reference method.

    Here's a breakdown of the requested information:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state pre-defined acceptance criteria for the percentage values (e.g., correlation coefficient, efficiency, sensitivity, specificity). However, it consistently states that "All method comparison testing met acceptance criteria." This implies that the achieved performance met internal or regulatory thresholds. Based on the provided performance data, here's a table:

    | Test/Measurement | Acceptance Criteria (Implicitly Met) | Reported Device Performance |
    |---|---|---|
    | WBC Correlation (Deming Regression r) | Met acceptance | |
    | Neutrophil (%) | Not explicitly stated | 98% |
    | Lymphocyte (%) | Not explicitly stated | 96% |
    | Monocyte (%) | Not explicitly stated | 95% |
    | Eosinophil (%) | Not explicitly stated | 98% |
    | WBC Differential (Efficiency) | Met acceptance | |
    | Morphological Abnormality | Not explicitly stated | 96.82% (96.12% to 97.43% CI) |
    | Distributional Abnormality | Not explicitly stated | 95.75% (94.95% to 96.46% CI) |
    | Overall WBC | Not explicitly stated | 96.29% (95.77% to 96.76% CI) |
    | WBC Differential (Sensitivity) | Met acceptance | |
    | Morphological Abnormality | Not explicitly stated | 85.46% (80.19% to 89.78% CI) |
    | Distributional Abnormality | Not explicitly stated | 88.83% (85.94% to 91.31% CI) |
    | Overall WBC | Not explicitly stated | 87.86% (85.38% to 90.06% CI) |
    | WBC Differential (Specificity) | Met acceptance | |
    | Morphological Abnormality | Not explicitly stated | 97.79% (97.16% to 98.31% CI) |
    | Distributional Abnormality | Not explicitly stated | 97.43% (96.70% to 98.03% CI) |
    | Overall WBC | Not explicitly stated | 97.62% (97.16% to 98.02% CI) |
    | RBC Morphology (Overall Agreement) | Met acceptance | |
    | Overall | Not explicitly stated | 99.77% (99.71% to 99.83% CI) |
    | Color Group | Not explicitly stated | 99.49% (99.14% to 99.73% CI) |
    | Shape Group | Not explicitly stated | 99.77% (99.68% to 99.84% CI) |
    | Size Group | Not explicitly stated | 99.61% (99.36% to 99.78% CI) |
    | Inclusions Group | Not explicitly stated | 100.00% (99.93% to 100.00% CI) |
    | Arrangement Group | Not explicitly stated | 96.65% (95.52% to 97.57% CI) |
    | Platelet Estimation (Deming Regression r) | Met acceptance | |
    | Platelets Estimation (10³/μL) | Not explicitly stated | 94% |
    | Platelet Estimation (Efficiency) | Met acceptance | |
    | Overall | Not explicitly stated | 94.89% (92.78% to 96.53% CI) |
    | Platelet Estimation (Sensitivity) | Met acceptance | |
    | Overall | Not explicitly stated | 90.00% (83.51% to 94.57% CI) |
    | Platelet Estimation (Specificity) | Met acceptance | |
    | Overall | Not explicitly stated | 96.28% (94.11% to 97.82% CI) |
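    Each efficiency, sensitivity, and specificity figure above is reported with a 95% confidence interval. The submission does not state which interval method was used; the sketch below uses the Wilson score interval, one common choice for binomial proportions, with hypothetical counts.

    ```python
    import math

    def wilson_ci(successes: int, n: int, z: float = 1.96):
        """Wilson score confidence interval for a binomial proportion
        (z = 1.96 gives the usual 95% interval)."""
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    # Hypothetical counts: 180 of 200 abnormal slides correctly flagged.
    lo, hi = wilson_ci(180, 200)  # sensitivity 90.0% with its 95% CI
    ```

    Note that the Wilson interval is asymmetric around the point estimate, which matches the shape of the intervals reported in the table (e.g., 90.00% with a CI of 83.51% to 94.57%).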

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: A total of 645 specimens.
      • 335 specimens were from normal (healthy) subjects.
      • 310 specimens were from subjects with specific disease conditions.
    • Data Provenance:
      • Country of Origin: Not explicitly stated. The study was conducted at "three sites" but their geographical location is not specified.
      • Retrospective or Prospective: Not explicitly stated, but the description "specimens were collected and analyzed at three sites" with slides being "randomly selected, blinded and read" suggests a prospective or at least prospectively designed evaluation of collected samples.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: "two examiners at each site." Since there were three sites, a total of 6 examiners were involved in establishing the ground truth.
    • Qualifications of Experts: The document states that the ground truth was established by "trained examiners" using a "manual light microscope." It also mentions elsewhere that the device is intended for "skilled users, trained in the use of the device and in the identification of blood cells." While specific certifications or years of experience are not detailed, the implication is that these examiners are qualified clinical laboratory professionals adept at manual blood smear analysis.

    4. Adjudication method for the test set

    • The text states, "The slides were randomly selected, blinded and read by two examiners at each site." It does not explicitly mention a formal adjudication method (e.g., 2+1, 3+1 consensus). It simply states that results were compared between the "Test Method" (X100) and the "Reference Method" (manual light microscope). It is implied that the manual readings by the two examiners constituted the reference. There is no information provided about how discrepancies between the two examiners' manual readings (if any) were resolved or if their results were averaged.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance was not the primary focus described.
    • The study primarily functions as a method comparison study, comparing the device's performance (which assists human readers) directly against the manual light microscope method (the established reference/ground truth).
    • The device "assists a qualified technologist" by locating and displaying images and suggesting classifications. The study evaluates the device-assisted technologist's performance against the manual technologist's performance.
    • Therefore, an "effect size of how much human readers improve with AI vs without AI assistance" is not reported in the context of a dedicated MRMC study.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • No, a standalone (algorithm-only) performance was not described or evaluated.
    • The device's intended use and the study design clearly state that it "assists a qualified technologist." The performance data reflects the combined system of the device and the human user, where the user reviews and can modify the device's suggestions (e.g., "may manually change the suggested classification of any cell," "may manually change the detections or the estimation"). It is a human-in-the-loop system.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • The ground truth was established by expert readings using a manual light microscope. This is effectively expert consensus if the two examiners at each site agreed, or if their readings were somehow combined to form the reference. The manual light microscope process itself is described as the "Reference Method."

    8. The sample size for the training set

    • The document does not specify the sample size for the training set. The performance data provided is solely for the "Method Comparison" study, which used the 645 specimens as the test set.

    9. How the ground truth for the training set was established

    • The document does not provide details on how the ground truth for the training set was established. Since the training set size is not mentioned, neither is the method for its ground truth. However, given that the device's "Analysis Technique" for WBCs uses "deterministic artificial neural networks (ANN's) trained to distinguish between classes of white blood cells," it is highly probable that the training ground truth was also established by expert classification of blood cells.

    K Number
    K182062
    Date Cleared
    2018-10-30

    (90 days)

    Product Code
    Regulation Number
    864.5260
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Name: Sysmex UD-10 Fully Automated Urine Particle Digital Imaging Device
    Regulation Number: 21 CFR 864.5260

    Common Name: Automated cell-locating device (urine sediments)

    Classification: 21 CFR 864.5260

    Intended Use

    The Sysmex® UD-10 Fully Automated Urine Particle Digital Imaging Device is intended for locating, digitally storing, and displaying microscopic images captured from urine specimens. The Sysmex® UD-10 locates and presents particles and cellular elements based on size ranges. The images are displayed for review and classification by a qualified clinical laboratory technologist on the Urinalysis Data Manager (UDM). This device is intended for in vitro diagnostic use in conjunction with a urine particle counter for screening patient populations found in clinical laboratories.

    Device Description

    The Sysmex® UD-10 is a medical device that captures images of cells and particles found in urine with a camera and displays the images on a display screen. The displayed data consists of images of individual particles that are extracted from the original captured whole field images. The device sorts urine particle images based on their size into eight groups (Class 1-8). These images are transferred to the UDM (Urinalysis Data Manager), where the operator enters the classification of the particle images based on their visual examination. The classification of the particles by the operator is a designation of what type of particles are observed (e.g., WBCs, RBCs, casts, bacteria).
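    The size-based sorting described above amounts to binning each particle image into one of eight classes by size. The UD-10's actual class boundaries are not disclosed in the summary, so the thresholds in this sketch are purely illustrative.

    ```python
    # Hypothetical size cut points (micrometers); seven boundaries yield
    # eight classes. The real UD-10 boundaries are not published here.
    CLASS_BOUNDS_UM = [1, 3, 7, 15, 30, 60, 120]

    def size_class(diameter_um: float) -> int:
        """Assign a particle image to one of eight size groups (Class 1-8)."""
        for idx, bound in enumerate(CLASS_BOUNDS_UM, start=1):
            if diameter_um < bound:
                return idx
        return 8

    small = size_class(0.5)    # bacteria-sized particle -> Class 1
    medium = size_class(10.0)  # RBC/WBC-sized particle -> Class 4
    ```

    The operator, not the device, then attaches a particle type (WBC, RBC, cast, bacteria, and so on) to each displayed class member; the binning only controls how images are grouped for review.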

    AI/ML Overview

    The Sysmex UD-10 is a device for locating, digitally storing, and displaying microscopic images captured from urine specimens. It presents particles and cellular elements based on size ranges for review and classification by a qualified clinical laboratory technologist.

    Here's an analysis of the acceptance criteria and the studies performed:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state pre-defined acceptance criteria values for agreement percentages in the precision, repeatability, and method comparison studies (except for the minimum requirement for overall agreement in reproducibility and repeatability). However, it does provide conclusions based on the results meeting statistical thresholds. The carryover study had an acceptance criterion within ±1.00%.

    | Study Type | Metric | Acceptance Criteria | Reported Device Performance |
    |---|---|---|---|
    | Reproducibility | Overall Agreement | Lower 95% confidence limit > 80.9% (minimum requirement) | 97.9% (95% CI: 95.2%, 99.1%) |
    | Repeatability | Overall Agreement | Lower 95% confidence limit > 85.2% (minimum requirement) | 100.0% (95% CI: 97.8%, 100.0%) |
    | Carryover | Carryover Effect | Within ±1.00% | RBC: 9.82x10^-23% to 4.16x10^-14% (PASS); BACT: 0.00% to -2.57x10^-6% (PASS); WBC: 1.05x10^-12% to 100.00% (FAIL at one site, but deemed clinically insignificant) |
    | Method Comparison | Overall Agreement (UD-10 vs. manual microscopy) | Exceed 85.2% (proposed requirement) | 92.0% (95% CI: 89.8%, 93.7%) |

    2. Sample Size and Data Provenance

    Reproducibility Study:

    • Sample size: 240 evaluations (from 120 samples of abnormal and normal QC material, processed twice a day for a minimum of 5 days).
    • Data provenance: Prospective, U.S. clinical sites (4 sites). Commercially available MAS® UA control material was used.

    Repeatability Study:

    • Sample size: 170 evaluations (from an unspecified number of normal residual urine samples, each assayed in 5 replicates).
    • Data provenance: Prospective, U.S. clinical sites (4 sites). Normal residual urine samples, collected without preservatives.

    Carryover Study:

    • Sample size: Not explicitly stated as a total number of samples, but involved High and Low concentration samples for BACT (4 sites), WBC (3 sites), and RBC (3 sites). Each sample was split into 3 aliquots (3 high, 3 low) and run consecutively. Results are presented for 3 replicates of high and 3 replicates of low samples per parameter per site.
    • Data provenance: Prospective, U.S. clinical sites (4 sites for BACT, 3 for WBC and RBC). Residual urine samples, collected without preservatives.
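    The 3-high/3-low consecutive-run design described above is the classic ICSH-style carryover protocol. The submission does not print its exact formula, so the following is a hedged sketch of the usual calculation: the first low result is compared against the third (stabilized) low result, scaled by the high-to-low span.

    ```python
    def carryover_percent(high_runs, low_runs):
        """Estimate carryover from three consecutive high-concentration runs
        followed by three low-concentration runs (ICSH-style protocol):
        carryover % = (L1 - L3) / (H3 - L3) * 100."""
        h3 = high_runs[2]
        l1, l3 = low_runs[0], low_runs[2]
        return (l1 - l3) / (h3 - l3) * 100.0

    # Hypothetical counts: identical low-sample results imply 0% carryover,
    # comfortably inside the ±1.00% criterion.
    pct = carryover_percent([5.10, 5.08, 5.11], [0.02, 0.02, 0.02])
    ```

    Under this formula, any residue from the high sample inflates only the first low replicate, so a first low result matching the later ones indicates no measurable carryover.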

    Method Comparison Study:

    • Sample size: 746 abnormal and normal urine samples.
    • Data provenance: Prospective, U.S. clinical sites (4 sites). Residual urine samples from daily routine laboratory workload, collected without preservatives.

    3. Number of Experts and Qualifications for Ground Truth

    Reproducibility Study:

    • Number of experts: One technologist per site (total of 4 technologists across 4 sites).
    • Qualifications: "Technologist." No specific experience level is mentioned.

    Repeatability Study:

    • Number of experts: Two technologists per sample per site, who independently reviewed and identified particle images.
    • Qualifications: "Technologist." No specific experience level is mentioned.

    Carryover Study:

    • Number of experts: One technologist per site.
    • Qualifications: "Technologist." No specific experience level is mentioned.

    Method Comparison Study:

    • Number of experts: Two technologists per sample per site. One classified elements on the UD-10, and a second performed visual read using manual light microscopy.
    • Qualifications: "One technologist" and "a second technologist." No specific experience level is mentioned.

    4. Adjudication Method

    Reproducibility, Repeatability, and Carryover Studies:

    • No explicit adjudication method is described. For repeatability, two technologists independently reviewed images, and "Each technologist's results were treated and recorded as an independent observation."

    Method Comparison Study:

    • No explicit adjudication method is described. One technologist used the UD-10, and another used manual microscopy. Their results were compared.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    There is no mention of a formal MRMC comparative effectiveness study in the sense of evaluating how much human readers improve with AI vs. without AI assistance. The study compares the UD-10 device's performance (which incorporates digital imaging and sorting for human review) against manual microscopy. The UD-10 is a digital imaging device, not strictly an 'AI' device in the typical sense of providing automated diagnosis or enhanced AI assistance to human readers for diagnostic interpretation (beyond presenting sorted images). The study is essentially a method comparison between the UD-10-assisted workflow and traditional manual microscopy.

    6. Standalone Performance (Algorithm Only)

    The Sysmex UD-10 is described as a device that "locates and presents particles and cellular elements based on size ranges. The images are displayed for review and classification by a qualified clinical laboratory technologist." This indicates that the device requires human-in-the-loop for classification and is not a standalone diagnostic algorithm. Its performance is implicitly tied to how well technologists can use the displayed images. The "Overall Agreement" metrics in the studies reflect the performance of the system (device + technologist).

    7. Type of Ground Truth Used

    Reproducibility Study:

    • Ground truth: Expected results from commercially available MAS® UA control material (Level 1 and Level 2).

    Repeatability Study:

    • Ground truth: Reference results provided by screening samples with the Sysmex UF-1000i urine analyzer (K070910).

    Carryover Study:

    • Ground truth: Determined by Sysmex UF-1000i results for high and low concentration samples.

    Method Comparison Study:

    • Ground truth: For the initial comparison, manual microscopy was considered the comparative method. For the referee comparison, the Sysmex UF-1000i (K070910) was used as the referee method to evaluate agreement between UD-10 and manual microscopy.

    8. Sample Size for the Training Set

    The document is a 510(k) summary for a medical device that captures and displays images for human review, not an AI/ML algorithm that requires a "training set" in the conventional sense of machine learning model development. Therefore, there is no mention of a training set sample size. The device uses size ranges to sort images, indicating a rule-based or conventional image processing approach rather than a complex AI model that learns from diverse training data.

    9. How the Ground Truth for the Training Set Was Established

    As noted above, this device does not appear to involve a machine learning training set in the way a typical AI algorithm would. Thus, this question is not applicable based on the provided information.


    K Number
    K171655
    Date Cleared
    2018-03-02

    (270 days)

    Product Code
    Regulation Number
    864.5220
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Classification Name: 21 CFR 864.5220, Differential Cell Counter, Class II; 21 CFR 864.5260

    Intended Use

    The cobas m 511 integrated hematology analyzer is a quantitative, automated analyzer with cell locating capability. It is intended for in vitro diagnostic use by a skilled operator in the clinical laboratory. The system prepares a stained microscope slide from EDTA-anticoagulated whole blood. It utilizes computer imaging to count the formed elements of blood and provide an image-based assessment of cell morphology, which may be reviewed by the operator, and also allows for manual classification of unclassified cells. The instrument reports the following parameters: RBC, HGB, HCT, MCV, MCH, MCHC, RDW, RDW-SD, %NRBC, #NRBC, WBC, %NEUT, #NEUT, %LYMPH, #LYMPH, %MONO, #MONO, %EO, #EO, %BASO, #BASO, PLT, MPV, %RET, #RET, HGB-RET.

    Device Description

    The cobas m 511 system is a fully automated stand-alone hematology analyzer with integrated slide-making capability and digital cell imaging. It provides a complete blood count, 5-part differential, and reticulocyte enumeration of samples of whole blood collected in K2 or K3 EDTA. It is designed for high throughput in the clinical laboratory environment.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text.

    Device: cobas m 511 integrated hematology analyzer
    Predicate Devices: Sysmex® XN-Series (XN-10, XN-20) Automated Hematology Analyzer and CellaVision® DM1200 Automated Hematology Analyzer

    It's important to note that the provided text is a 510(k) summary, which focuses on demonstrating substantial equivalence to predicate devices rather than directly outlining the acceptance criteria in a traditional sense (e.g., "sensitivity must be > X%, specificity > Y%"). Instead, the document describes performance studies and states that results "met acceptance criteria" or "were found to be acceptable," implying that specific internal acceptance criteria were used for each test. I will extract the performance results where available, and indicate where quantitative acceptance criteria are not explicitly stated but implied to have been met.


    1. Table of Acceptance Criteria and Reported Device Performance

    Note: The document does not explicitly present a table of acceptance criteria. Instead, it presents performance data and states that "acceptance criteria were met." Where quantitative values are provided, they represent the reported device performance which implicitly met the underlying, but unstated, acceptance criteria for FDA clearance.

    Performance CharacteristicAcceptance Criteria (Implied / Stated)Reported Device Performance
    Method Comparison (vs. Predicate Device)Correlation and bias "found to be acceptable for all reportable parameters" based on CLSI EP09-A3 guidelines.(See Table 3 in original document for full details. Below are examples for key parameters.)
    WBC [10³/µL]Implied to be acceptable.Pearson's (r) = 0.999; Intercept = 0.02; Slope = 1.012; Bias at low/high limits: 0.03 - 0.07 [10³/µL], 1.26 - 1.7 [%].
    RBC [10⁶/µL]Implied to be acceptable.Pearson's (r) = 0.974; Intercept = 0.02; Slope = 0.991; Bias at low/high limits: -0.41 - -0.56.
    HGB [g/dL]Implied to be acceptable.Pearson's (r) = 0.970; Intercept = -0.33; Slope = 1.046; Bias at low/high limits: -0.12 - 0.14 [g/dL], 1.35 - 3.08 [%].
    PLT [10³/µL]Implied to be acceptable.Pearson's (r) = 0.973; Intercept = -11.03; Slope = 1.020; Bias at low/high limits: -10.83 - 0.88.
    %NEUT [%]Implied to be acceptable.Pearson's (r) = 0.989; Intercept = 1.62; Slope = 1.012; Bias at low/high limits: 3.06 - 5.21.
    %LYMPH [%]Implied to be acceptable.Pearson's (r) = 0.989; Intercept = -0.23; Slope = 0.977; Bias at low/high limits: -2.66 - -0.81.
    Flagging Capabilities (Clinical Sensitivity & Specificity): For WBC messages (flags) vs. the 400-cell reference method, the device met acceptance criteria for sensitivity and specificity.
      • Sensitivity = 92.9% (118/(118+9))
      • Specificity = 96.8% (302/(302+10))
    Precision (Repeatability, within-run): "Repeatability results met their pre-defined acceptance criteria." (Based on CLSI EP05-A3 and H26-A2 standards.) See Table 5 in the original document for full details; examples for WBC and PLT below.
      • WBC [10³/µL] (all samples), implied to be acceptable: mean of sample means 12.06; SD 0.233; %CV 1.93
      • PLT [10³/µL] (all samples), implied to be acceptable: mean of sample means 246.77; SD 6.749; %CV 2.73
    Reproducibility (Total Precision): "Reproducibility for the three (3) levels of DigiMAC3 controls was calculated and found to be acceptable for all sites combined for all reportable parameters." (Consistent with CLSI EP05-A3.) See Table 6 in the original document for full details; examples for WBC and PLT below.
      • WBC [10³/µL] (L1 control), implied to be acceptable: mean 16.93; total (reproducibility) SD 0.404; %CV 2.39
      • PLT [10³/µL] (L1 control), implied to be acceptable: mean 470.53; total (reproducibility) SD 8.090; %CV 1.72
    Linearity: Demonstrated to be linear from the lower to the upper limit; "all results met acceptance criteria." (Based on CLSI EP06-A.) See Table 7 in the original document for examples showing that the "Maximum Absolute Deviation (Relative)" met the allowed deviation, e.g., WBC System 1: 0% dev (8%) vs. 0.5% allowed (15%).
    Carryover: "Carryover results for the cobas m 511 system met acceptance criteria." (Based on ICSH guidelines and CLSI H26-A2.)
      • White Blood Cells: 0.000%
      • Red Blood Cells: 0.000%
      • Platelets: 0.001%
      • Blasts: 0.000%
    Interfering Substances: No significant interference effects up to the specified concentrations, except for the noted thresholds (HGB/HCT with hemolysis, WBC/#LYMPH with lipemia).
      • Unconjugated/Conjugated Bilirubin: no significant effects up to 40 mg/dL
      • Hemolysis: no significant effects up to 1000 mg/dL, except HGB ≥ 672 mg/dL and HCT ≥ 792 mg/dL
      • Lipemia: no significant effects up to 3000 mg/dL, except WBC ≥ 1646 mg/dL and #LYMPH ≥ 2459 mg/dL
      • High WBC/PLT concentration: no significant effects up to 100.2 x 10⁹/µL (WBC) and 1166 x 10³/µL (PLT)
    Specimen Stability: "The combined results demonstrated stability for normal and abnormal samples up to or beyond twenty-four (24) hours." Samples were stable up to and beyond 24 hours at ambient (15°C-25°C) and refrigerated (2°C-8°C) temperatures.
    Anticoagulant Comparison: "All acceptance criteria were met, demonstrating equivalency of results obtained from samples collected into K2 EDTA and K3 EDTA." Equivalence established.
    Venous and Capillary Blood Method Comparisons: "Overall, the data demonstrate comparable results between venous and capillary blood processed on the cobas m 511 system." Comparability established.
    Mode to Mode Analysis: "The results were found to be acceptable in that all twenty-six (26) reportable parameters that were evaluated met acceptance criteria."
    Limit of Blank (LoB), Limit of Detection (LoD), and Limit of Quantitation (LoQ): (Based on CLSI EP17-A2 guidelines.)
      • WBC: LoB = 0.05 x 10³/µL; LoD = 0.08 x 10³/µL; LoQ = 0.24 x 10³/µL
      • PLT: LoB = 1 x 10³/µL; LoD = 3 x 10³/µL; LoQ = 6 x 10³/µL
    Reference Intervals: "Normal reference ranges for adult and pediatric cohorts are consistent with those in the published literature." Established for adult males, adult females, and six pediatric subgroups.
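    The summary statistics above follow the standard definitions; as an illustrative check (not part of the submission), the reported sensitivity, specificity, and %CV values can be reproduced from the stated counts and means:

```python
# Illustrative check of the summary statistics above (not from the submission itself).

def sensitivity(tp, fn):
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

def percent_cv(sd, mean):
    """Coefficient of variation as a percentage: 100 * SD / mean."""
    return 100.0 * sd / mean

# WBC flagging vs. the 400-cell reference method
sens = sensitivity(118, 9)       # ≈ 0.929 → 92.9%
spec = specificity(302, 10)      # ≈ 0.968 → 96.8%

# Within-run repeatability examples (WBC and PLT)
cv_wbc = percent_cv(0.233, 12.06)    # ≈ 1.93%
cv_plt = percent_cv(6.749, 246.77)   # ≈ 2.73%

print(f"Sensitivity {sens:.1%}, Specificity {spec:.1%}")
print(f"WBC %CV {cv_wbc:.2f}, PLT %CV {cv_plt:.2f}")
```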

    2. Sample Sizes and Data Provenance

    • Test Set Sample Size:
      • Method Comparison: 1859-1864 samples (exact number varies slightly by parameter, presumably due to valid data points for each). Sample collection lasted "a minimum of two (2) weeks at each of four (4) clinical sites."
      • Flagging Capabilities: 439 samples (for WBC flags).
      • Precision (Repeatability): 144 samples (for primary parameters), with 31-143 individual samples processed for different parameter groups. Total observations were 4436 (for WBC, RBC, HGB, etc.) and 4405 (for PLT, MPV, %NRBC, etc.) derived from 31 consecutive runs.
      • Reproducibility: 120 observations per control level (for 3 control levels per parameter).
      • Linearity: Not explicitly stated as a single number but involved serial dilutions run for 6 replicates (5 for reticulocytes) on multiple systems.
      • Carryover: 12 independent carryover experiments.
      • Interfering Substances: Dose-response experiments using 6 incremental concentration samples for each substance.
      • Specimen Stability: 31 normal samples, 14 abnormal samples.
      • Anticoagulant Comparison: 44 healthy donor samples, 40 residual abnormal samples.
      • Venous and Capillary Blood Method Comparisons: 40 healthy donor samples, 40 residual abnormal capillary samples.
      • Mode to Mode Analysis: Not a specified sample size number, but compared results from closed-tube vs. open-tube modes.
      • LoB, LoD, LoQ: "three (3) individual test days" for each.
    • Data Provenance: The studies were conducted at four (4) clinical sites. The document does not specify the country of origin of these sites but implies they are clinical laboratories. The studies are prospective in nature, as they involve testing samples on the newly developed cobas m 511 system and comparison to predicate devices/reference methods. The samples were "residual whole blood samples" (for some studies) or collected specifically for the studies ("from healthy volunteer donors," "apheresis samples").

    3. Number of Experts and Qualifications for Ground Truth

    • Expert Usage: For Flagging Capabilities, the "400-cell reference method" refers to the combined results from two (2) 200-cell WBC differentials performed by individuals on two (2) separate blood smears. For Carryover, "slides from the LTV serum samples were reviewed by an external hematopathologist to determine cell carryover."
    • Number of Experts: At least two individuals performed the 200-cell WBC differentials for the flagging study. At least one external hematopathologist was used for the carryover study.
    • Qualifications: "Skilled operator in the clinical laboratory" is mentioned as the intended user. For the flagging study, "individuals" are implied to be laboratory professionals trained in WBC differentials. The carryover study explicitly mentions an "external hematopathologist," implying a medical doctor specializing in laboratory hematology, which suggests a high level of expertise.

    4. Adjudication Method for the Test Set

    • Flagging Capabilities: The "400-cell reference method" involved two separate 200-cell differentials. The combination of these two suggests a form of consensus or combined result, but specific adjudication rules (e.g., if discrepancies, a third reader decides) are not detailed. It's presented as a direct summation or combination of two expert reads.
    • Other Studies: For quantitative parameter comparisons, the ground truth for the "reference method" (often the predicate device or a standardized laboratory method) is assumed to be the established truth, and formal adjudication among multiple readers is not explicitly mentioned as being part of the process for the device's numerical outputs.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No Multi-Reader Multi-Case (MRMC) comparative effectiveness study was explicitly described in the provided text for comparing human reader performance with and without AI assistance. The device is an automated analyzer, and its primary comparison is to another automated analyzer (Sysmex XN-Series) and an automated cell locating device that assists a human operator (CellaVision DM1200).
    • The document implies that the cobas m 511 system provides images that may be reviewed by the operator and allows for manual classification of unclassified cells (%NEUT, %LYMPH, etc. are determined by the instrument). The comparison for "Flagging Capabilities" involves the device's flags against a human expert reference method, highlighting the device's ability to trigger review. However, this is not a study measuring the improvement of human readers with AI assistance.
    • Therefore, an effect size of how much human readers improve with AI vs. without AI assistance is not provided as such a study was not the focus of this 510(k) submission.

    6. Standalone Performance Study

    • Yes, standalone performance (algorithm-only, without human-in-the-loop) was extensively evaluated. The "Analytical Performance" section (5.1) details multiple studies of the device's performance in measuring various blood parameters independently.
      • Method Comparison: Compares the device's readings against a predicate device.
      • Precision (Repeatability and Reproducibility): Measures the device's consistency.
      • Linearity: Assesses the device's accuracy across its measuring range.
      • Carryover, Interfering Substances, Specimen Stability, LoB/LoD/LoQ: All measure the inherent performance characteristics of the automated analyzer itself.
    • The device "utilizes computer imaging to count the formed elements of blood and provide an image-based assessment of cell morphology," and "reports the following parameters: RBC, HGB, HCT, MCV, MCH, MCHC, RDW, RDW-SD, %NRBC, #NRBC, WBC, %NEUT, #NEUT, %LYMPH, #LYMPH, %MONO, #MONO, %EO, #EO, %BASO, #BASO, PLT, MPV, %RET, #RET, HGB-RET." These are all automated measurements.

    7. Type of Ground Truth Used

    The type of ground truth used varies by study:

    • Method Comparison: The predicate device's measurements (Sysmex XN-Series) served as the comparator/ground truth for quantitative parameters.
    • Flagging Capabilities: A 400-cell reference method which involved expert consensus (two individuals performing 200-cell WBC differentials on separate smears) was used as ground truth for WBC flagging. This represents a type of expert consensus based on microscopy.
    • Precision and Reproducibility: Standardized quality control materials (DigiMAC3 controls) with known values, and repeated measurements of patient samples across ranges.
    • Carryover: Expert review by an external hematopathologist of slides to confirm cell carryover.
    • Linearity, Interfering Substances, Stability, LoB/LoD/LoQ: These studies establish the device's intrinsic performance characteristics, often relative to expected values or reference methods for their respective tests, rather than a single 'ground truth' in the diagnostic sense. For linearity, prepared samples with known (or expected) concentrations are typically used.
    • Reference Intervals: Based on statistical analysis of samples from normal healthy donors and comparison to published literature.

    8. Sample Size for the Training Set

    • The document does not report the sample size for the training set for the AI/computer imaging components. This 510(k) summary focuses on the validation of the final device rather than its development. Details about the training data used for the "proprietary imaging algorithms" are typically considered proprietary and not required in a public 510(k) summary.

    9. How the Ground Truth for the Training Set Was Established

    • Since the training set size and details are not provided, the method for establishing ground truth for the training set is also not discussed in this document. It is highly probable, given the nature of the device, that ground truth for training would involve extensive manual expert review and classification of blood cell images by trained morphologists or hematopathologists.
    Ask a Question

    Ask a specific question about this device

    K Number
    K171315
    Manufacturer
    Date Cleared
    2017-08-01

    (89 days)

    Product Code
    Regulation Number
    864.5260
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Trade/Device Name: CellaVision DM96 and DM1200 with Advanced RBC Application Regulation Number: 21 CFR 864.5260
    Usual Name: Automated cell-locating device Classification Name: Automated cell-locating device (21 CFR 864.5260)

    AI/MLSaMDIVD (In Vitro Diagnostic)TherapeuticDiagnosticis PCCP AuthorizedThirdpartyExpeditedreview
    Intended Use

    The CellaVision DM1200 with the Advanced RBC Application is an automated cell-locating device, intended for in-vitro diagnostic use.

    The CellaVision DM1200 with the Advanced RBC Application automatically locates and presents images of blood cells on peripheral blood smears. The operator identifies and verifies the suggested classification of each cell according to type.

    The CellaVision DM1200 with the Advanced RBC Application is intended for blood samples that have been flagged as abnormal by an automated cell counter.

    The CellaVision DM1200 with the Advanced RBC Application is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells.

    The CellaVision DM96 with the Advanced RBC Application is an automated cell-locating device, intended for in-vitro diagnostic use.

    The CellaVision DM96 with the Advanced RBC Application automatically locates and presents images of blood cells on peripheral blood smears. The operator identifies and verifies the suggested classification of each cell according to type.

    The CellaVision DM96 with the Advanced RBC Application is intended for blood samples that have been flagged as abnormal by an automated cell counter.

    The CellaVision DM96 with the Advanced RBC Application is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells.

    Device Description

    The Advanced RBC Application is substantially equivalent to the RBC functionality included in the predicate DM Systems. It pre-characterizes the morphology of the red blood cells in a sample based on abnormal color, size, and shape (Poikilocytosis). In addition to that, the Advanced RBC Application also pre-characterizes based on different types of Poikilocytosis and on the presence of certain inclusions.

    The DM Systems display the result of the RBC pre-characterization as the percentage of abnormal cells for each morphological characteristic and as an automatically calculated grade (0 - normal through 3 - marked), corresponding to that percentage. It also displays an overview image of the RBC monolayer. The difference between the current RBC functionality and the Advanced RBC Application is the analysis technique, which enables the Advanced RBC Application to pre-characterize RBCs into 21 morphological characteristics, as opposed to the 6 morphological characteristics of the current RBC functionality. The cell images are pre-characterized into different groups of morphological characteristics based on size, color, shape and inclusions using segmentation, feature calculation, and deterministic artificial neural networks (ANNs) trained to distinguish between morphology characteristics of red blood cells.

    Another difference is that the red blood cells, pre-characterized by the Advanced RBC Application, can be displayed both in an overview and in individual images on the screen, while the current RBC functionality displays the pre-characterized red blood cells in an overview image only.

    As in the current RBC functionality, the user reviews the overview image and can change the characterization by manually changing the grades for any morphological characteristic. With the Advanced RBC Application, the user can also view individual cells, grouped by morphological characteristic and change the characterization by reclassifying individual cells.
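    The percentage-to-grade mapping described above can be sketched as follows. The threshold values here are hypothetical, since the 510(k) summary does not disclose the actual cut-offs used by the Advanced RBC Application:

```python
# Hypothetical sketch of mapping an abnormal-cell percentage to a 0-3 grade.
# The thresholds below are illustrative only; the actual cut-offs used by the
# Advanced RBC Application are not disclosed in the 510(k) summary.

def grade_from_percentage(pct_abnormal: float) -> int:
    """Map % abnormal cells to a grade: 0 (normal) through 3 (marked)."""
    thresholds = [5.0, 15.0, 30.0]  # hypothetical grade boundaries
    grade = 0
    for cutoff in thresholds:
        if pct_abnormal >= cutoff:
            grade += 1
    return grade

print(grade_from_percentage(2.0))   # below all thresholds → grade 0
print(grade_from_percentage(20.0))  # crosses two thresholds → grade 2
```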

    AI/ML Overview

    The provided text describes the CellaVision DM96 and DM1200 with Advanced RBC Application, an automated cell-locating device. Here's a breakdown of the requested information based on the text:

    1. A table of acceptance criteria and the reported device performance

    The document states that a clinical evaluation was conducted, and the results "met the predefined acceptance criteria" for various metrics. However, the precise quantitative acceptance criteria and the exact reported performance values are not explicitly stated in this summary. The summary only mentions that the results fulfilled the acceptance criteria.

    Metric (Morphology Group) | Acceptance Criteria | Reported Device Performance
    RBC group Size | Overall Agreement, Positive Percent Agreement (PPA), Negative Percent Agreement (NPA) | "fulfilled the acceptance criteria"
    Groups Color, Shape, Inclusions, and clinically significant morphologies | Efficiency, Sensitivity, Specificity | "fulfilled the acceptance criteria"
    Individual morphological characteristics | Sensitivity, Specificity | "fulfilled the target limits"

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample size: The document states, "Samples were collected and tested for RBC characterization on DM96 and DM1200 at different laboratories." However, the exact sample size for the clinical evaluation (test set) is not specified.
    • Data provenance: The samples were collected "from routine workflow from hospital laboratories" and "in accordance with the target patient population, i.e. from samples flagged as abnormal by an automated cell counter." This suggests prospective collection from clinical settings, but specific countries of origin are not mentioned.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    For the clinical evaluation, the "manual microscopy (Reference Method)" was used as the primary comparator for most morphology groups. For the "RBC group Size," an "automated cell counter" was used as a "convenient predicate device."

    • The document implies that the ground truth for most RBC characteristics was established by manual microscopy, which would involve human experts. However, the number of experts and their specific qualifications are not provided. The device's intended use states, "The CellaVision DM96/DM1200 with the Advanced RBC Application is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells," which implies that the reference method would also involve such skilled operators.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document does not specify an adjudication method for establishing the ground truth from manual microscopy or for resolving discrepancies between readers (if multiple readers were used). It simply refers to "manual microscopy (Reference Method)" as the comparator.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • No, an MRMC comparative effectiveness study was not explicitly described in the provided text. The study conducted was a comparison between the automated device (CellaVision Advanced RBC Application) and manual microscopy (Reference Method), not a study evaluating human reader improvement with AI assistance.
    • The device "automatically locates and presents images of blood cells on peripheral blood smears. The operator identifies and verifies the suggested classification of each cell according to type." While the device assists the human, the study's objective was about the equivalence of the device's characterization results to manual microscopy, not the improvement of human readers.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • No, a standalone (algorithm only) performance study was not described. The device is explicitly designed for a human-in-the-loop workflow: "The operator identifies and verifies the suggested classification of each cell according to type." Therefore, the evaluation would inherently include this human interaction. The clinical evaluation compared the "Advanced RBC Application installed on CellaVision DM96 and CellaVision DM1200 (Test Methods)" to the manual method, implying system performance with human verification.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The ground truth used was primarily:

    • Expert consensus from manual microscopy for most RBC morphological characteristics.
    • Automated cell counter results for the "RBC group Size" (Macrocytes, Microcytes, and Anisocytosis), as manual microscopy was deemed "highly difficult, time consuming and thereby impractical" for this group.

    8. The sample size for the training set

    The document does not specify the sample size used for the training set of the deterministic artificial neural networks (ANNs). It only mentions that the ANNs were "trained to distinguish between morphology characteristics of red blood cells."

    9. How the ground truth for the training set was established

    The document does not explicitly state how the ground truth for the training set was established. It mentions that the ANNs were "trained to distinguish between morphology characteristics of red blood cells," implying that labeled data was used for training, but the method of obtaining these labels (e.g., expert annotations, specific pathological confirmation) is not detailed.

    Ask a Question

    Ask a specific question about this device

    K Number
    K130775
    Device Name
    DUET SYSTEM
    Manufacturer
    Date Cleared
    2014-05-09

    (415 days)

    Product Code
    Regulation Number
    866.4700
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Product Code/Regulation No. 5.5

    • Automated cell-locating devices, product code: JOY, Regulation No. 864.5260
    AI/MLSaMDIVD (In Vitro Diagnostic)TherapeuticDiagnosticis PCCP AuthorizedThirdpartyExpeditedreview
    Intended Use

    The Duet™ System is an automated scanning microscope and image analysis system. It is intended for in-vitro diagnostic use as an aid to the pathologist in the detection, classification and counting of cells of interest based on color, intensity, size, pattern and shape.

    The Duet™ System is intended to:

    1. Detect Hematopoietic cells stained by Giemsa stain, Immunohistochemistry or ISH (with brightfield and fluorescent) prepared from cell suspension.

    2. Detect Amniotic cells stained by FISH (using direct labeled DNA probes for chromosomes X, Y, 13, 18 and 21).

    3. Detect aneuploidy of chromosomes 3, 7, 17 and loss of the 9p21 locus via FISH in urine specimens from subjects with transitional cell carcinoma of the bladder, probed by the Vysis Urovysion Bladder Cancer Kit.

    4. Detect and quantify chromosome 17 and the HER-2/neu gene via fluorescence in situ hybridization (FISH) in interphase nuclei from formalin-fixed, paraffin embedded human breast cancer tissue specimens, probed by the Vysis® Path Vysion™ HER-2 DNA Probe Kit. The Duet™ is to be used as an adjunctive automated enumeration tool, in conjunction with manual review of the digital image, to assist in determining HER-2/neu gene to chromosome 17 signal ratio.

    5. Qualitatively detect rearrangements involving the ALK gene via fluorescence in situ hybridization (FISH) in formalin-fixed, paraffin-embedded (FFPE) non-small cell lung cancer (NSCLC) tissue specimens, probed with the Vysis® ALK Break Apart FISH Probe Kit. The Duet™ is to be used as an adjunctive automated enumeration tool with manual review of the digital image. Note: The pathologist should verify the image analysis software application score.

    Device Description

    The Duet™ System is a fully integrated imaging and scanning platform that automates time-consuming and difficult laboratory tasks of slide scanning.

    The Duet™ System workstation integrates a microscope, CCD camera, motorized stage / slide-loader, computer, keyboard, mouse, joystick, monitor and a dedicated software program.

    The Duet™ System is software controlled and includes features such as: acquisition of images, views, editing, relocation, enhancement capabilities, automatic/manual counting and classification, printing, export of images and backups.

    The Duet™ System scans in high resolution cell samples at high speed both in bright light illumination and in fluorescent illumination.

    The Duet™ System suggests classification of the cells according to their morphological features, their staining (Giemsa, IHC) and fluorescent signals, and allows the user to quickly examine the results, correct them as needed and generate a report summarizing the sample's data. The Duet™ system allows combined presentation of morphological and specific staining information of the same cell, for all the cells of the sample.

    AI/ML Overview

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria (Coefficient of Variation, CV) | Reported Device Performance (Mean %CV Range for Individual Slides; Overall %CV derived from Random Model)
    Positive samples: CV < 10% | Below specified goals
    Negative samples, with mean percentage below 4%: CV ≤ 180% | Within-run: 0.0% - 86.6% (for negative slides with mean below 4%)
    Ask a Question

    Ask a specific question about this device

    K Number
    K102778
    Manufacturer
    Date Cleared
    2011-09-16

    (357 days)

    Product Code
    Regulation Number
    864.5260
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Classification Regulation: 21 CFR 864.5220 and 21 CFR 864.5260
    K102778

    Trade/Device Name: CellaVision® DM 1200 with body fluid application Regulation Number: 21 CFR 864.5260

    AI/MLSaMDIVD (In Vitro Diagnostic)TherapeuticDiagnosticis PCCP AuthorizedThirdpartyExpeditedreview
    Intended Use

    DM1200 is an automated system intended for in-vitro diagnostic use.

    The body fluid application is intended for differential count of white blood cells. The system automatically locates and presents images of cells on cytocentrifuged body fluid preparations. The operator identifies and verifies the suggested classification of each cell according to type.

    DM1200 is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells.

    Device Description

    CellaVision DM1200 with the body fluid application automatically locates and presents images of nucleated cells on cytocentrifuged body fluid preparations. The system suggests a classification for each cell and the operator verifies the classification and has the opportunity to change the suggested classification of any cell.
    The system preclassifies to the following WBC classes: Unidentified, Neutrophils, Eosinophils, Lymphocytes, Macrophages (including Monocytes) and Other. Cells preclassified as Basophils, Lymphoma cells, Atypical lymphocytes, Blasts and Tumor cells are automatically forwarded to the cell class Other.
    Unidentified is a class for cells and objects which the system has pre-classified with a low confidence level.

    AI/ML Overview

    Here's an analysis of the provided text, outlining the acceptance criteria and study details for the CellaVision DM1200 with the body fluid application:


    Acceptance Criteria and Device Performance Study for CellaVision DM1200 with Body Fluid Application

    The CellaVision DM1200 with body fluid application is an automated cell-locating device intended for in-vitro diagnostic use, specifically for the differential count of white blood cells in cytocentrifuged body fluid preparations. The system automatically locates and presents cell images, suggests a classification, and requires operator verification.

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document does not explicitly state pre-defined "acceptance criteria" as pass/fail thresholds for accuracy or precision. Instead, it presents the results of a method comparison study between the CellaVision DM1200 (Test Method) and its predicate device, CellaVision DM96 (Reference Method), for various leukocyte classifications. The implied acceptance criteria are that the DM1200 demonstrates comparable accuracy and precision to the predicate device.

    Cell Class | Acceptance Criteria (Implied: Comparable to Predicate) | Reported Device Performance (CellaVision DM1200 vs. DM96)
    Accuracy (Regression Analysis: DM1200 = Slope * DM96 + Intercept)
    Neutrophils | Slope close to 1, intercept close to 0, high R² | y = 0.9969x + 0.0050, R² = 0.9932 (95% CI Slope: 0.9868-1.0070; 95% CI Intercept: 0.0004-0.0096)
    Lymphocytes | Slope close to 1, intercept close to 0, high R² | y = 0.9815x + 0.0016, R² = 0.9829 (95% CI Slope: 0.9656-0.9973; 95% CI Intercept: -0.0049-0.0081)
    Eosinophils | Slope close to 1, intercept close to 0, high R² | y = 1.1048x - 0.0002, R² = 0.9629 (95% CI Slope: 1.0782-1.1314; 95% CI Intercept: -0.0007-0.0003)
    Macrophages | Slope close to 1, intercept close to 0, high R² | y = 1.0067x - 0.0050, R² = 0.9823 (95% CI Slope: 0.9901-1.0232; 95% CI Intercept: -0.0125-0.0024)
    Other cells | Slope close to 1, intercept close to 0, high R² | y = 0.9534x + 0.0032, R² = 0.9273 (95% CI Slope: 0.9207-0.9861; 95% CI Intercept: -0.0002-0.0065)
    Precision/Reproducibility (Short-term Imprecision)
    Neutrophils | SD % comparable between test and reference method | Test Method: Mean % 32.0, SD % 3.2; Reference Method: Mean % 31.6, SD % 3.4
    Lymphocytes | SD % comparable between test and reference method | Test Method: Mean % 30.1, SD % 5.6; Reference Method: Mean % 30.5, SD % 5.7
    Eosinophils | SD % comparable between test and reference method | Test Method: Mean % 0.6, SD % 0.7; Reference Method: Mean % 0.5, SD % 0.6
    Macrophages | SD % comparable between test and reference method | Test Method: Mean % 35.3, SD % 5.8; Reference Method: Mean % 35.5, SD % 6.2
    Other cells | SD % comparable between test and reference method | Test Method: Mean % 2.1, SD % 1.7; Reference Method: Mean % 1.9, SD % 2.5

    The conclusion states that the short-term imprecision was found to be equivalent for the test method and the reference method, and the accuracy results (high R-squared values, slopes close to 1, and intercepts close to 0) demonstrate substantial equivalence.
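    The accuracy analysis above is an ordinary least-squares regression of the test method against the reference method. A minimal sketch of computing the slope, intercept, and R² for such a method comparison, on made-up paired fractions rather than the study data:

```python
# Minimal sketch of a method-comparison regression (test vs. reference),
# using made-up paired fractions, not the study data.
import statistics

def ols_fit(x, y):
    """Ordinary least squares: returns (slope, intercept, r_squared)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r_squared = 1.0 - ss_res / ss_tot
    return slope, intercept, r_squared

# Reference-method fractions vs. test-method fractions (hypothetical)
ref  = [0.10, 0.25, 0.40, 0.55, 0.70]
test = [0.11, 0.24, 0.41, 0.54, 0.71]
slope, intercept, r2 = ols_fit(ref, test)
# Equivalence is supported when the slope CI includes 1, the intercept CI
# includes 0, and R² is high (cf. the per-class results above).
print(f"y = {slope:.4f}x + {intercept:.4f}, R² = {r2:.4f}")
```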

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: 260 samples.
      • CSF: 62 samples
      • Serous fluid: 151 samples
      • Synovial fluid: 47 samples
    • Data Provenance:
      • Country of Origin: Not explicitly stated, but samples were collected from "two sites." Given the submitter is in Sweden, and the regulatory contact is in the USA, it's unclear if these sites were in Sweden, the USA, or elsewhere.
      • Retrospective or Prospective: Not explicitly stated, but the description "collected from two sites" and then analyzed suggests a prospective collection or at least fresh samples for the study.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Not explicitly stated for the establishment of ground truth for the test set's initial classifications. However, the document mentions:
      • "The results were then verified by skilled human operators." This indicates human review post-analysis by both the test and reference methods.
      • The "Intended Use" section states: "DM1200 is intended to be used by skilled operators, trained in the use of the device and in recognition of blood cells."
    • Qualifications of Experts: "Skilled human operators, trained in the use of the device and in recognition of blood cells." No specific professional qualifications (e.g., "radiologist with 10 years of experience") are provided.

    4. Adjudication Method for the Test Set

    The document states: "The results were then verified by skilled human operators." It does not specify a multi-reader adjudication method like 2+1 or 3+1. It implies a single operator verification for each result generated by both the test and reference methods.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a multi-reader multi-case (MRMC) comparative effectiveness study designed to measure the effect size of how much human readers improve with AI vs. without AI assistance was not explicitly described. This study was a method comparison between two devices (one with the new application, one being the predicate) with human verification. The device's function is to suggest classifications, which implies an assistive role, but the study design was not an MRMC study comparing human performance with and without AI.

    6. Standalone (Algorithm Only) Performance Study

    • Yes, a standalone performance was implicitly done. The "Test Method" (CellaVision DM1200) "suggests a classification for each cell," meaning the algorithm performs an initial classification without human intervention. The reported accuracy metrics (R², slope, intercept) compare these suggested classifications to those obtained by the reference method (DM96, which also involves algorithmic preclassification). However, the study concludes with human verification of these results, so the standalone performance without the "human-in-the-loop" step described in the intended use is not the final reported performance. The "Accuracy results" table (Table 3.3) and "Precision/Reproducibility" table (Table 3.4) reflect the device's performance before the final human verification step that might change classifications.

    7. Type of Ground Truth Used

    The ground truth used for the comparison was established by the predicate device (CellaVision DM96) with its own human verification, after undergoing "a 200-cell differential count... with both the test method and the reference method. The results were then verified by skilled human operators." Therefore, it's a form of expert-verified reference measurement. It is not pathology, or outcomes data.

    8. Sample Size for the Training Set

    The document does not explicitly state the sample size for the training set used to develop the CellaVision DM1200's classification algorithms. It mentions "deterministic artificial neural networks (ANN's) trained to distinguish between classes of white blood cells," but no details about the training data are provided within this summary.

    9. How the Ground Truth for the Training Set Was Established

    The document does not explicitly describe how the ground truth for the training set was established. It states that the ANNs were "trained to distinguish between classes of white blood cells," implying that a labeled dataset was used for training, but the process of creating these labels (e.g., expert consensus, manual review) is not detailed in this 510(k) summary.

    Ask a Question

    Ask a specific question about this device

    Page 1 of 3