Search Results

Found 8 results

510(k) Data Aggregation

    K Number
    K012372
    Device Name
    SYSMEX UF-50
    Manufacturer
    Date Cleared
    2001-09-28

    (64 days)

    Product Code
    Regulation Number
    864.5200
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Intended Use

    The Sysmex UF-50 is a fully automated urine cell analyzer intended for in vitro diagnostic use in urinalysis within the clinical laboratory. The UF-50 replaces microscopic review of normal/abnormal specimens and flags specimens containing certain abnormalities which indicate the need for further testing. Laboratorians are responsible for final microscopic review of flagged abnormalities.

    Device Description

    The UF-50 is a fully automated urine cell analyzer for urinalysis in clinical laboratories. It analyzes formed elements in urine using flow cytometry technology.

    AI/ML Overview

    The provided document describes the Sysmex UF-50, a fully automated urine cell analyzer. The document focuses on its substantial equivalence to a predicate device, the Sysmex UF-100.

    Here's an analysis of the acceptance criteria and the study that proves the device meets them:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state numerical acceptance criteria for the UF-50. Instead, the primary "acceptance criterion" is demonstrated by establishing substantial equivalence to the predicate device, the Sysmex UF-100. The reported device performance is that the "correlation results performance correlated to those of the two analyzers, therefore supporting the claim of substantial equivalence."

    Since exact numerical criteria are not given, a criteria-versus-performance table cannot be constructed; qualitatively, however, the UF-50's performance is reported as aligned with that of the predicate device.
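
    Because the equivalence claim rests on how well the two analyzers agree on the same specimens, a simple agreement calculation illustrates the kind of comparison involved. The sketch below is purely illustrative: the flagging results are hypothetical, and the 510(k) does not report its raw data or this exact analysis.

```python
# Minimal sketch (not from the 510(k)): estimating flagging agreement between
# two urine analyzers run on the same split specimens. All data are hypothetical.

def flag_agreement(flags_a, flags_b):
    """Return overall percent agreement between two lists of boolean flags."""
    if len(flags_a) != len(flags_b):
        raise ValueError("paired specimen lists must be the same length")
    matches = sum(a == b for a, b in zip(flags_a, flags_b))
    return 100.0 * matches / len(flags_a)

# Hypothetical flags for 10 split specimens (True = analyzer flagged for review)
uf50_flags  = [True, False, False, True, False, True, False, False, True, False]
uf100_flags = [True, False, False, True, False, False, False, False, True, False]

print(f"Overall agreement: {flag_agreement(uf50_flags, uf100_flags):.1f}%")  # 90.0%
```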

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size for Test Set: The document does not specify the exact sample size for the correlation studies. It generally refers to "correlation studies."
    • Data Provenance: Not specified. It's unclear if the data was retrospective or prospective, or the country of origin.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This information is not explicitly provided. Given that the study focuses on correlating the UF-50 to a predicate device (UF-100) and that the UF-50 flags specimens for "final microscopic review of abnormalities" by laboratorians, any ground truth likely involved microscopic review by qualified laboratorians, but their number and qualifications are not detailed.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document does not describe any specific adjudication method for establishing ground truth, as the primary comparison is between the UF-50 and UF-100.

    5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    • MRMC Study: No, a multi-reader multi-case comparative effectiveness study was not done. The study compares the UF-50 (an automated analyzer) to another automated analyzer (UF-100), not human readers with and without AI assistance.
    • Effect Size: Not applicable, as no such study was performed. The device is intended to flag specimens for human review, not to improve human reader performance directly.

    6. Whether a standalone study (i.e., algorithm-only performance, without a human in the loop) was done

    Yes, a standalone study of the algorithm's performance was done. The "Clinical Performance Data" section describes "Correlation studies were performed to evaluate the equivalency of the UF-50 performance compared to the predicate device, the UF-100." This indicates that the UF-50's performance was evaluated independently, comparing its output directly to the predicate device's output. The device itself is described as a "fully automated urine cell analyzer" performing "in vitro diagnostic use," implying standalone operation to generate results that are then compared.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The ground truth for the correlation studies appears to be the results obtained from the predicate device, the Sysmex UF-100. The study "evaluate[d] the equivalency of the UF-50 performance compared to the predicate device, the UF-100." Therefore, the UF-100's performance served as the reference for determining the UF-50's equivalency.

    8. The sample size for the training set

    The document does not describe a "training set" in the context of machine learning. The UF-50 is described as using "flow cytometry technology," which is a well-established analytical method, not typically associated with machine learning training sets in the same way an AI algorithm might be.

    9. How the ground truth for the training set was established

    Not applicable, as no machine learning training set is described.


    K Number
    K992875
    Manufacturer
    Date Cleared
    1999-11-09

    (75 days)

    Product Code
    Regulation Number
    864.5220
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The Sysmex™ XE-2100 is a multi-parameter hematology analyzer intended to classify the following formed elements in anti-coagulated blood: WBC, Neut%/#, Lymph%/#, Mono%/#, Eo%/#, Baso%/#, NRBC%/#, RBC, HGB, HCT, MCV, MCH, MCHC, RDW-CV, RDW-SD, RET%/#, IRF, HFR*, MFR*, LFR*, PLT, MPV, PDW*, P-LCR*, PCT* (*Not Reportable in USA).

    Device Description

    The XE-2100 is an automated hematology analyzer which consists of four principal units: (1) Main Unit, which aspirates, dilutes, mixes, and analyzes whole blood samples; (2) Sampler Unit, which supplies samples to the Main Unit automatically; (3) IPU (Information Processing Unit), which processes data from the Main Unit and provides the operator interface to the system; (4) Pneumatic Unit, which supplies pressure and vacuum to the Main Unit.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study information for the Sysmex™ Automated Hematology Analyzer XE-2100, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided 510(k) summary does not explicitly state numerical acceptance criteria for the performance of the Sysmex™ XE-2100. Instead, it relies on demonstrating substantial equivalence to predicate devices (Sysmex SE/RAM-1 and SF-3000) and established methods (manual differential, flow cytometry).

    The reported device performance is described as "performance to manufacturer specifications" for carryover, precision, linearity, and sample stability. For other parameters, performance is considered "similar to the predicate devices."

    Feature/Parameter | Acceptance Criteria (Implied) | Reported Device Performance
    Overall Device | Substantial equivalence to predicate devices (SE/RAM-1, SF-3000), manual differential, and flow cytometry | Performance claims are similar to predicate devices; supports substantial equivalence
    Carryover | Manufacturer specifications | Performance met manufacturer specifications
    Precision | Manufacturer specifications | Performance met manufacturer specifications
    Linearity | Manufacturer specifications | Performance met manufacturer specifications
    Sample Stability | Manufacturer specifications | Performance met manufacturer specifications
    Correlation Studies (all parameters listed for XE-2100) | Results expected to be similar to predicate devices for relevant parameters | Specimens from healthy individuals and pathological conditions showed results similar to the predicate devices
    WBC Differential | Correlation with manual differentials (NCCLS H20A) and flow cytometry | Correlated to results from manual differentials (NCCLS H20A) and flow cytometry
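
    For context, carryover of the kind cited above is commonly estimated by running a high sample in triplicate followed by a low sample in triplicate and expressing the residual contribution as a percentage. The sketch below illustrates that general approach with hypothetical WBC values and an assumed 1% limit; the submission does not disclose its actual protocol, data, or specifications.

```python
# Illustrative sketch only: one common way to estimate analyzer carryover is to
# run a high sample three times, then a low sample three times, and express the
# residual contribution as a percentage. The specification limit shown here is
# hypothetical, not taken from the 510(k).

def carryover_percent(high_runs, low_runs):
    """Carryover (%) = (L1 - L3) / (H3 - L3) * 100, using consecutive replicates."""
    h3 = high_runs[2]
    l1, l3 = low_runs[0], low_runs[2]
    return (l1 - l3) / (h3 - l3) * 100.0

# Hypothetical WBC results (x10^3/uL)
high = [85.2, 84.9, 85.1]
low = [0.32, 0.30, 0.30]

co = carryover_percent(high, low)
print(f"WBC carryover: {co:.3f}%")                     # ~0.024%
print("Within spec" if co <= 1.0 else "Out of spec")   # assumed 1% limit
```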

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: The document does not explicitly state the specific number of samples used in the correlation studies. It mentions that "specimens were evaluated from apparently healthy individuals and from patients with different pathological conditions." This implies a varied and likely representative sample, but the exact count is not given.
    • Data Provenance: The document does not state the country of origin of the data. The studies appear to be prospective as they were conducted to evaluate the performance of the XE-2100 against predicate devices and established methods.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: The document does not specify the number of experts used.
    • Qualifications of Experts: The document states that the WBC differential of the XE-2100 was correlated to results from "manual differentials performed according to NCCLS H20A." This implies that the manual differentials, which serve as a form of ground truth, were performed by trained laboratory professionals adhering to a recognized standard (NCCLS H20A). No specific experience level (e.g., radiologist with 10 years of experience) is mentioned for these individuals.

    4. Adjudication Method for the Test Set

    The document does not describe a formal adjudication method (like 2+1 or 3+1) for establishing ground truth. The comparison for WBC differential was made against "manual differentials performed according to NCCLS H20A and to flow cytometry." This suggests a direct comparison to established methods rather than a consensus-based adjudication by multiple independent experts of the device's output.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No multi-reader multi-case (MRMC) comparative effectiveness study is mentioned in the provided text. The evaluation focuses on the device's performance against predicate devices and established laboratory methods, not on how the device assists human readers.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, the study described is a standalone performance evaluation. The Sysmex XE-2100 is an "Automated Hematology Analyzer," and the studies (carryover, precision, linearity, sample stability, and correlation studies) evaluate the device's output directly against reference methods or predicate devices, without human-in-the-loop interaction as part of the primary performance assessment.

    7. Type of Ground Truth Used

    The ground truth used in the studies includes:

    • Established Laboratory Methods:
      • Manual differentials performed according to NCCLS H20A for WBC differential.
      • Flow cytometry for WBC differential.
    • Predicate Devices: The Sysmex SE/RAM-1 and SF-3000 served as a comparative reference for many parameters. While not strictly "ground truth," their established performance was used as a benchmark for substantial equivalence.

    8. Sample Size for the Training Set

    The document does not provide any information about a training set or its sample size. This is typical for devices of this era, especially those demonstrating equivalence to predicate devices, where extensive AI/machine learning training sets as understood today were not a primary requirement for 510(k) submission. The device's operation is based on established analytical principles (e.g., flow cytometry, DC detection).

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned, there is no information on how its ground truth would have been established.


    K Number
    K981950
    Manufacturer
    Date Cleared
    1998-11-03

    (153 days)

    Product Code
    Regulation Number
    864.5200
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The intended use of the Sysmex R-3500 is as a fully automated reticulocyte analyzer for in vitro diagnostic use in clinical laboratories.

    Device Description

    The Sysmex™ R-3500 is an automated reticulocyte analyzer intended for in vitro use in clinical laboratories. The R-3500 provides accurate and precise test results for 8 analysis parameters in whole blood. These include RET%, RET#, RBC, IRF, LFR, MFR, HFR, and PLT. The R-3500 processes approximately 120 samples per hour and displays and prints the data for Reticulocyte number, Reticulocyte percent, Red blood cell count, Immature reticulocyte fraction, fluorescent ratios, and platelets along with representative scattergrams. Sample abnormalities are indicated by abnormal marks, flags, and error messages which appear on the DMS display screen and on the printout. This is an indication that the sample is not within the acceptable range and requires further review and investigation. The R-3500 uses the principle of flow cytometry for reticulocyte analysis. In the instrument, a whole blood sample is automatically aspirated, diluted and stained with a fluorescent dye (Auromine-O). The sample is hydrodynamically focused into a narrow path and passed through a flow cell, where it is illuminated by an Argon laser beam. The cells present in the sample will fluoresce and scatter light to varying degrees. It is the analysis of the intensity of emitted fluorescent light and intensity of scattered light which allows the R-3500 analyzer to detect and enumerate reticulocytes.
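
    To make the measurement principle concrete, the sketch below shows, in simplified form, how events characterized by scatter and fluorescence intensity could be assigned to mature RBC, LFR, MFR, and HFR classes. The thresholds and event values are invented for illustration and do not reflect the R-3500's actual gating logic.

```python
# Illustrative gating sketch (thresholds and events are invented, not the
# instrument's): each measured event has a forward-scatter value and a
# fluorescence intensity, and reticulocytes fall into low/middle/high
# fluorescence regions whose counts sum to the total reticulocyte count.
from collections import Counter

MFR_CUT, HFR_CUT = 100.0, 200.0   # hypothetical fluorescence cut-offs
RETIC_FLUOR_MIN = 30.0            # below this, an RBC-sized event counts as a mature RBC

def classify(event):
    scatter, fluor = event  # scatter would normally help exclude debris; ignored here
    if fluor < RETIC_FLUOR_MIN:
        return "mature RBC"
    if fluor < MFR_CUT:
        return "LFR"
    if fluor < HFR_CUT:
        return "MFR"
    return "HFR"

events = [(120, 10), (115, 45), (118, 150), (121, 260), (119, 12), (117, 80)]
counts = Counter(classify(e) for e in events)
retics = counts["LFR"] + counts["MFR"] + counts["HFR"]
print(counts)
print(f"IRF = MFR + HFR = {100 * (counts['MFR'] + counts['HFR']) / retics:.1f}% of reticulocytes")
```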

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Sysmex™ Automated Reticulocyte Analyzer R-3500 based on the provided text:

    1. Acceptance Criteria and Reported Device Performance

    The document does not explicitly state pre-defined quantitative acceptance criteria (e.g., "RET% must have an r-value > 0.990"). Instead, it demonstrates the device's performance through correlation studies comparing it to a predicate device. The implicit acceptance criterion is "substantial equivalence" to the predicate device, which is supported by high correlation coefficients (r and r²) and regression equations.

    Parameter | n | r | r² | Regression Equation | Reported Device Performance (Correlation with Predicate Device)
    RET# | 487 | 0.994 | 0.988 | y = 0.965x + 0.001 | Very strong positive correlation
    RET% | 487 | 0.997 | 0.994 | y = 0.964x + 0.051 | Very strong positive correlation
    RBC | 486 | 0.998 | 0.997 | y = 1.009x - 0.072 | Very strong positive correlation
    IRF | 486 | 0.956 | 0.913 | y = 0.948x + 1.409 | Strong positive correlation
    LFR | 486 | 0.956 | 0.913 | y = 0.948x + 3.819 | Strong positive correlation
    MFR | 486 | 0.923 | 0.852 | y = 0.917x + 1.433 | Strong positive correlation
    HFR | 486 | 0.954 | 0.910 | y = 0.940x + 0.490 | Strong positive correlation
    Platelet | 482 | 0.994 | 0.989 | y = 0.937x + 10.619 | Very strong positive correlation
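
    The statistics in the table (n, r, r², and the least-squares regression line) are the standard outputs of a paired method-comparison analysis. The sketch below shows how they are typically computed; the paired values are hypothetical, not the submission's data.

```python
# Minimal sketch, on hypothetical paired results, of how a method-comparison
# table's statistics are typically derived: sample size, Pearson r, r^2, and
# an ordinary least-squares fit y = slope*x + intercept.
import numpy as np

predicate = np.array([1.2, 2.5, 0.8, 3.1, 1.9, 2.2, 0.5, 4.0])        # e.g., predicate RET%
candidate = np.array([1.15, 2.45, 0.82, 3.05, 1.85, 2.15, 0.52, 3.9])  # e.g., R-3500 RET%

n = len(predicate)
r = np.corrcoef(predicate, candidate)[0, 1]
slope, intercept = np.polyfit(predicate, candidate, 1)  # ordinary least squares

print(f"n = {n}, r = {r:.3f}, r^2 = {r**2:.3f}")
print(f"regression: y = {slope:.3f}x {intercept:+.3f}")
```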

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: The sample size for the test set varied slightly by parameter:
      • RET# & RET%: 487 samples
      • RBC, IRF, LFR, MFR, HFR: 486 samples
      • Platelet: 482 samples
    • Data Provenance:
      • Country of Origin: Not explicitly stated, but the "research center" where the studies were performed is mentioned. The manufacturer is TOA Medical Electronics Co. in Kobe, Japan, and the importer/distributor is Sysmex™ Corporation in Long Grove, IL, USA. Given the context of seeking FDA clearance, it's likely the studies were conducted to satisfy US regulatory requirements, but the specific geographic origin of the patient samples is not provided.
      • Retrospective or Prospective: The text states, "In these studies, the following comparative performance evaluations were conducted using the proposed device and the predicate device to evaluate specimens from apparently healthy individuals and from patients with different pathological conditions which are expected to affect the results for particular parameters." This suggests the samples were collected and then tested on both devices for comparison, which aligns with typical prospective or concurrent comparison study methodology. It doesn't indicate purely retrospective analysis of existing data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not mention the use of experts to establish a "ground truth" for the test set. The study compares the performance of the new device (R-3500) against an already cleared predicate device (RAM-1). The predicate device's measurements are effectively treated as the reference for comparison.

    4. Adjudication Method for the Test Set

    Not applicable. Since the comparison is primarily device-to-device measurements, there is no mention of human adjudication for the test set results. Each device generated its own results, which were then statistically compared.

    5. Whether a Multi-Reader, Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of How Much Human Readers Improve with AI Versus Without AI Assistance

    Not applicable. This is not an AI/algorithm-assisted human reading device. It's a fully automated analyzer. The study focuses on the comparison between two automated instruments.

    6. Whether a Standalone Study (i.e., Algorithm-Only Performance, Without a Human in the Loop) Was Done

    Yes, the study described is a standalone performance evaluation. The Sysmex™ R-3500 is an automated reticulocyte analyzer. The performance data presented (correlation coefficients and regression equations) represent the device's measurements compared directly to those of a predicate automated device. There is no human intervention in the generation of the results being compared.

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    The "ground truth" in this context is established by the measurements from the predicate device, Sysmex™ SE/RAM-1. The study's objective is to demonstrate substantial equivalence, meaning the new device performs comparably to a legally marketed device.

    8. The Sample Size for the Training Set

    The document does not provide information about a "training set" or its sample size. This type of 510(k) submission for a diagnostic analyzer typically focuses on demonstrating the performance of the final, released device compared to a predicate, rather than detailing the development and training phases of an algorithm.

    9. How the Ground Truth for the Training Set was Established

    Not applicable, as no training set information is provided in the document.


    K Number
    K981761
    Manufacturer
    Date Cleared
    1998-08-14

    (87 days)

    Product Code
    Regulation Number
    864.5200
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The intended use of the Sysmex KX-21 is as an automated cell counter for in vitro diagnostic use in clinical laboratories.

    Device Description

    The Sysmex™ KX-21 is an automated blood cell counter intended for in vitro diagnostic use in clinical laboratories.

    AI/ML Overview

    The Sysmex™ Automated Hematology Analyzer KX-21's acceptance criteria and performance are detailed through correlation studies against a predicate device, the Sysmex™ K-1000. The study aimed to demonstrate substantial equivalence by comparing the results of various hematological parameters.

    Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document implicitly uses the correlation coefficient (r) and coefficient of determination (r²) as the acceptance criteria for substantial equivalence to the predicate device. A high 'r' value (close to 1) indicates good correlation. While explicit numerical thresholds for acceptance (e.g., r > 0.95) are not stated, the reported values are presented as evidence of meeting performance claims.

    Parameter | n | Expected Acceptance Criteria (Implicit: High r/r²) | Reported Device Performance (r) | Reported Device Performance (r²) | Regression Equation
    WBC | 194 | High (close to 1.0) | 0.990 | 0.981 | y = 0.944x + 0.265
    RBC | 195 | High (close to 1.0) | 0.996 | 0.993 | y = 1.002x + 0.053
    HGB | 195 | High (close to 1.0) | 0.997 | 0.995 | y = 0.979x + 0.405
    HCT | 195 | High (close to 1.0) | 0.996 | 0.992 | y = 0.989x + 1.025
    MCV | 195 | High (close to 1.0) | 0.997 | 0.995 | y = 0.95x + 4.846
    MCH | 195 | High (close to 1.0) | 0.967 | 0.934 | y = 0.989x + 0.297
    MCHC | 195 | High (close to 1.0) | 0.829 | 0.688 | y = 0.813x + 5.935
    Platelet | 193 | High (close to 1.0) | 0.996 | 0.993 | y = 1.079x - 12.881
    Lymph% | 150 | High (close to 1.0) | 0.995 | 0.991 | y = 1.017x - 0.072
    Mixed% | 150 | High (close to 1.0) | 0.874 | 0.763 | y = 1.055x + 1.35
    Neut% | 150 | High (close to 1.0) | 0.984 | 0.968 | y = 1.011x - 0.292
    RDW-SD | 198 | High (close to 1.0) | 0.976 | 0.953 | y = 0.942x + 5.96
    MPV | 179 | High (close to 1.0) | 0.961 | 0.924 | y = 0.947x + 0.611
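
    One way to read the regression equations above is to predict the expected difference between the two analyzers at representative analyte levels. The sketch below applies the reported WBC equation (y = 0.944x + 0.265) to a few hypothetical K-1000 values; the chosen levels, and any interpretation of the resulting differences, are illustrative only.

```python
# Illustrative use of a reported regression equation (coefficients copied from
# the table above) to gauge the expected difference between the two analyzers
# at a given level. Clinical interpretation of any bias is outside this sketch.

def predicted_kx21(k1000_value, slope, intercept):
    """Predict the KX-21 result from a K-1000 result via y = slope*x + intercept."""
    return slope * k1000_value + intercept

# WBC: y = 0.944x + 0.265 (from the correlation table above)
for wbc in (4.0, 7.5, 11.0):  # hypothetical K-1000 WBC values, x10^3/uL
    y = predicted_kx21(wbc, 0.944, 0.265)
    print(f"K-1000 WBC {wbc:4.1f} -> predicted KX-21 {y:5.2f} (diff {y - wbc:+.2f})")
```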

    2. Sample size used for the test set and the data provenance:

    • Sample Size: The sample sizes vary by parameter:
      • WBC: 194
      • RBC, HGB, HCT, MCV, MCH, MCHC: 195
      • Platelet: 193
      • Lymph%, Mixed%, Neut%: 150
      • RDW-SD: 198
      • MPV: 179
    • Data Provenance: The study evaluated "specimens from apparently healthy individuals and from patients with different pathological conditions." The country of origin is not explicitly stated, but the manufacturer is based in Kobe, Japan, and the importer/distributor is in Long Grove, IL, USA. The study design is prospective in the sense that specimens were evaluated using both devices for comparison.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    This information is not provided in the document. The study relies on correlation with a predicate device (Sysmex™ K-1000) rather than a separate expert-established ground truth.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    This information is not provided. As the study focuses on device-to-device correlation, an adjudication method for human-interpreted results is not applicable.

    5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

    A multi-reader multi-case (MRMC) comparative effectiveness study was not performed. This study is about the performance of an automated hematology analyzer, not about human reader performance or AI assistance.

    6. Whether a standalone study (i.e., algorithm-only performance, without a human in the loop) was done:

    Yes, a standalone performance study was implicitly done in the form of correlation studies comparing the KX-21 to the K-1000. These are automated devices; their performance is measured directly, not in conjunction with human interpretation for the primary parameters.

    7. The type of ground truth used:

    The "ground truth" for the test set was the measurements obtained from the Sysmex™ K-1000 predicate device. The study aims to show that the new device (KX-21) produces results that correlate strongly with the established device (K-1000).

    8. The sample size for the training set:

    The document does not specify a separate training set. The study appears to be a performance evaluation against a predicate, where the algorithms of both the KX-21 and K-1000 are presumably already "trained" or designed.

    9. How the ground truth for the training set was established:

    As no separate training set is explicitly mentioned for the KX-21's development or the K-1000, this information is not provided. The ground truth for this correlation study is the predicate device's output.


    K Number
    K971736
    Manufacturer
    Date Cleared
    1997-08-26

    (106 days)

    Product Code
    Regulation Number
    864.5220
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The IRF parameter of the R-Series analyzers is intended for in vitro use in clinical laboratories. Its clinical use is to monitor bone marrow suppression and recovery in terms of erythropoiesis in situations of previous cancer chemotherapeutic bone marrow suppression and bone marrow transplantation. Qualified laboratory personnel are responsible for review of all abnormal results.

    Device Description

    The R-1000, R-3000 and RAM-1 are table-top analyzer systems for automated reticulocyte counting. These instruments are dedicated flow cytometers which dilute and stain whole blood with a fluorescent dye (Auromine-O), then count and measure fluorescence and scatter of stained blood cells. The fluorescence intensity is measured, and the analyzer identifies the reticulocytes based on fluorescence and scatter. The fluorescent intensity of the reticulocytes is displayed on the analyzer as a scattergram, and this display is separated into three regions: Low Fluorescence Intensity (LFR), Middle Fluorescence Intensity (MFR), and High Fluorescence Intensity (HFR). These regions are reported as a ratio or percentage (sum = 100%), in addition to the analyzer reporting the total reticulocyte count and RBC count. The IRF parameter is determined by the sum of the MFR (Middle Fluorescence Ratio) and HFR (High Fluorescence Ratio): MFR + HFR = IRF.
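
    The following tiny check simply restates the arithmetic in the description: the three fluorescence regions sum to 100% of the reticulocytes, and IRF is the sum of MFR and HFR. The percentages used are hypothetical.

```python
# Worked check of the stated relationship: LFR + MFR + HFR = 100% and
# IRF = MFR + HFR. The region percentages below are hypothetical.
lfr, mfr, hfr = 82.0, 13.5, 4.5   # % of reticulocytes in each fluorescence region

assert abs((lfr + mfr + hfr) - 100.0) < 1e-9, "regions must sum to 100%"
irf = mfr + hfr
print(f"IRF = {irf:.1f}%")   # 18.0%
```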

    AI/ML Overview

    Here's an analysis of the provided 510(k) summary regarding the Sysmex™ IRF parameter, focusing on acceptance criteria and the study that proves the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document describes the intended use and performance of the Sysmex™ IRF parameter by comparing it to predicate methodologies. However, it does not explicitly state specific numerical acceptance criteria (e.g., sensitivity, specificity, accuracy thresholds) for the IRF parameter itself. Instead, it makes a general claim of similarity or superiority to existing methods.

    Acceptance Criteria (not explicitly stated numerically in the document) | Reported Device Performance
    Predictive indicator of bone marrow suppression and recovery for erythropoiesis in patients undergoing chemotherapy or bone marrow transplantation, similar or superior to predicate methods (WBC, ANC, reticulocyte count) | "The Immature Reticulocyte Fraction is similar to WBC and ANC, and similar or better than reticulocyte count as a monitor for bone marrow production." "Clinical studies have been performed which support this claim for patients receiving myeloablative chemotherapy." "In addition, several clinical studies have been published which support this claim for patients having undergone bone marrow transplantation."

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: The document does not specify the exact sample size for the test set used in the reported clinical studies. It vaguely mentions "clinical studies."
    • Data Provenance: The document does not explicitly state the country of origin for the data or whether the studies were retrospective or prospective. It only refers to "clinical studies" and "published clinical studies."

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    The document does not provide information on the number of experts used or their qualifications to establish ground truth for the test set. Given the nature of the parameter (monitoring bone marrow activity), the "ground truth" would likely be derived from established clinical markers and patient outcomes rather than expert image interpretation.

    4. Adjudication Method for the Test Set

    The document does not specify any adjudication method (e.g., 2+1, 3+1, none) for the test set.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. The device (IRF parameter) is an automated laboratory test, not an imaging device requiring human reader interpretation. Therefore, a study to assess human reader improvement with AI assistance is not applicable.
    • Effect size of human readers improving with AI vs. without AI assistance: Not applicable.

    6. Standalone (Algorithm Only) Performance Study

    • Was a standalone study done? Yes, the reported performance is inherently a standalone (algorithm only) performance. The Sysmex R-Series analyzers automatically calculate the IRF parameter. The clinical studies mentioned evaluate the predictive value of this automated parameter in monitoring bone marrow activity. The device is not designed as an AI assistant for human interpretation but rather as a direct measurement tool.

    7. Type of Ground Truth Used

    The ground truth for evaluating the IRF parameter's effectiveness appears to be:

    • Clinical Outcomes/Established Clinical Markers: The ground truth is implied to be actual bone marrow suppression and recovery, as measured by established clinical indicators like WBC, ANC, and reticulocyte counts, and ultimately, patient outcomes in terms of erythropoiesis in the context of chemotherapy and bone marrow transplantation.

    8. Sample Size for the Training Set

    The document does not provide information on the sample size used for any training set. As this is an older 510(k) (1997) for a parameter derived from established flow cytometry principles (measuring fluorescence intensity), it's highly probable that traditional algorithms were used, not machine learning or AI models in the modern sense requiring extensive training sets with labeled ground truth in the same way. The device's underlying technology relies on physical measurements and thresholds.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how ground truth was established for any training set. As mentioned, the technology likely predates modern AI training methodologies. The parameters (LFR, MFR, HFR, IRF) are based on the intrinsic fluorescent properties of reticulocytes at different maturation stages, and the "ground truth" for identifying these populations would have been established through extensive research into flow cytometry and reticulocyte biology. The "training" would have involved optimizing physical parameters and thresholds within the instrument's design based on known biological characteristics, rather than a data-driven machine learning approach.


    K Number
    K964946
    Manufacturer
    Date Cleared
    1997-03-18

    (98 days)

    Product Code
    Regulation Number
    864.5425
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Sysmex® CA-1000 & CA-5000 are intended for use as automated blood plasma coagulation analyzers.

    Device Description

    Automated blood plasma coagulation analyzers. The systems were described in detail in premarket notifications, document control numbers K931149/A, K933886, K942096/S1 and K942097/S1. The devices belong to the same family of instruments, and they are equivalent in their technological features and performance.

    AI/ML Overview

    The acceptance criteria and device performance information for the Sysmex® Automated Coagulation Analyzer CA-1000 and CA-5000 are detailed below, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document describes a comparison between new software versions (24 for CA-1000, 13 for CA-5000) and previous software versions or predicate devices. The acceptance criteria are implicitly good correlation and agreement between the new and predicate devices/software, indicated by high correlation coefficients and regression equations close to Y=X.

    Test | Acceptance Criteria (implicit) | Reported Device Performance (r) | Regression Equation
    Prothrombin Time (PT), seconds | High correlation (e.g., r > 0.95) | 0.999 | Y = 0.97X + 0.1
    Activated Partial Thromboplastin Time (APTT), seconds | High correlation (e.g., r > 0.95) | 0.978 | Y = 0.99X + 0.7
    Fibrinogen (Clauss), mg/dL | High correlation (e.g., r > 0.95) | 0.995 | Y = 0.96X + 10.4
    Derived Fibrinogen | High correlation (e.g., r > 0.95) | 0.951 | Y = 1.02X - 9.8
    Factor VII Assay | High correlation (e.g., r > 0.95) | 0.996 | Y = 0.95X + 1.0
    Factor VIII Assay | High correlation (e.g., r > 0.95) | 0.995 | Y = 0.92X + 0.6
    Thrombin Time | High correlation (e.g., r > 0.95) | 0.951 | Y = 1.00X - 0.3

    The acceptance criteria are inferred based on the strong correlations and regression equations presented, indicating that the new software versions perform comparably to the established predicate devices/software.
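
    As a quick illustration of how the implicit criterion discussed above could be applied, the snippet below checks each reported correlation coefficient against the assumed r > 0.95 threshold. The threshold itself is inferred for illustration, not stated in the submission.

```python
# Quick check (not part of the submission) that every reported correlation
# coefficient clears the assumed r > 0.95 threshold discussed above.
reported_r = {
    "PT": 0.999, "APTT": 0.978, "Fibrinogen (Clauss)": 0.995,
    "Derived Fibrinogen": 0.951, "Factor VII": 0.996,
    "Factor VIII": 0.995, "Thrombin Time": 0.951,
}

for test, r in reported_r.items():
    status = "meets" if r > 0.95 else "below"
    print(f"{test:20s} r = {r:.3f} -> {status} the assumed r > 0.95 criterion")
```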

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: The sample sizes for each test ranged from 19 to 49 samples, as reported in the "Sample Number (n)" column of the table. Specifically:
      • PT: 39 samples
      • APTT: 37 samples
      • Fibrinogen (Clauss): 41 samples
      • Derived Fibrinogen: 19 samples
      • Factor VII Assay: 49 samples
      • Factor VIII Assay: 38 samples
      • Thrombin Time: 42 samples
    • Data Provenance: The samples consisted of "plasma samples with each representative analyte of the core coagulation assays." This group "represented approximately even numbers of males and females, consisted of approximately 40 samples." The study evaluated specimens from "apparently healthy individuals and from patients with different pathological conditions." The document does not specify the country of origin or whether the data was retrospective or prospective, only that it was a "clinical study."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    This type of device is an automated coagulation analyzer. Ground truth is established by comparing the results of the new device/software against the results of a predicate device or established laboratory methods. Therefore, the concept of "experts" establishing ground truth in the traditional sense (e.g., radiologists interpreting images) is not directly applicable. The "ground truth" here is the measurement obtained by the predicate device or a recognized standard method.

    4. Adjudication Method for the Test Set

    Adjudication methods (e.g., 2+1, 3+1) are typically used for subjective assessments where multiple human interpreters might disagree. For an automated coagulation analyzer, the "ground truth" (or reference standard) is typically derived from laboratory measurements using an established method or a predicate device. Therefore, no multi-expert adjudication method is relevant or described for this type of test.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    No, an MRMC comparative effectiveness study was not done. This study focuses on the performance comparison between the proposed device/software and a predicate device/software, not on the improvement of human readers with AI assistance. The device is an automated analyzer, not an AI-assisted diagnostic tool for human interpretation.

    6. Whether a Standalone Study (i.e., Algorithm-Only Performance, Without a Human in the Loop) Was Done

    Yes, this study represents a standalone (algorithm only) performance evaluation. The coagulation analyzer is an automated system, and the study compares its measurements generated by different software versions to each other, without human interpretation being part of the primary measurement process.

    7. The Type of Ground Truth Used

    The ground truth for this comparison was the measurements obtained from the predicate device or the current software versions of the Sysmex® Automated Coagulation Analyzers. The studies were "method comparison evaluations" comparing "Software Version 21 versus Software Version 24" for the CA-1000 and CA-5000 systems. This indicates that the results of the established, previously cleared software/device served as the reference for evaluating the new software/device.

    8. The Sample Size for the Training Set

    The document does not explicitly mention a "training set" for the software. This is common for traditional laboratory instruments where the software's algorithms are based on established biochemical principles and calibrated during development rather than "trained" on a large dataset in the machine learning sense. The clinical study described evaluates the performance of the already developed software.

    9. How the Ground Truth for the Training Set Was Established

    As noted above, a "training set" in the context of machine learning is not applicable here. The algorithms within the coagulation analyzer software are developed based on known coagulation assay principles and calibrated using standard reference materials and methods, not by establishing ground truth from a large, labeled training dataset in the same way an AI model would be.


    K Number
    K964375
    Device Name
    SYSMEX SE/RAM-1
    Manufacturer
    Date Cleared
    1997-03-13

    (132 days)

    Product Code
    Regulation Number
    864.5200
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The SE/RAM-1 is an In Vitro device for whole blood cell analysis in clinical laboratories. The SE-9000 portion is intended for blood cell analysis; the RAM-1 portion is intended for automated reticulocyte analysis. The instrument is a screening device for identifying abnormal blood specimens. Medical technologists are responsible for final review of abnormal cells.

    Device Description

    The SE/RAM-1 is a table-top analyzer system consisting of a blood cell counting unit (SE-9000), a reticulocyte unit (RAM-1), a Data Management System, and a printer. A Sampler Unit is attached to the SE-9000 unit, which transports samples in cassette racks to the ID bar code reader and then to the Automatic mixing/Cap piercing device.

    The SE/RAM-1 system has the capability of reporting up to 23 parameters. The 23 reportable parameters are as follows: RBC, HGB, HCT, MCV, MCH, MCHC, RDW-CV, RDW-SD, PLT, MPV, WBC, NEUT%, LYMPH%, MONO%, EO%, BASO%, NEUT#, LYMPH#, MONO#, EO#, BASO# (all from the SE-9000 unit), RETIC%, RETIC# (from the RAM-1 unit).

    The SRV of the SE-9000 contains a port (in the original design of the instrument) by which the reticulocyte module (RAM-1) is attached. The SRV splits the blood sample into 8 aliquots. Seven aliquots are delivered to the SE-9000 system, diluted with the appropriate diluent or lysing reagent, and sent to their respective detector blocks for analysis. The eighth aliquot is sent to the RAM-1, where it is diluted and stained, then sent to the flow cell for analysis. The SE/RAM-1 system uses a total of up to 13 reagents. All reagents are the same as used on the stand-alone SE-9000 and R-3000 analyzers.

    AI/ML Overview

    The provided text describes a 510(k) Summary of Safety and Effectiveness for the Sysmex™ SE/RAM-1 device. This document focuses on demonstrating substantial equivalence to previously cleared devices (Sysmex™ SE-9000 and R-3000) rather than presenting a detailed study with explicit acceptance criteria and performance metrics for a novel device. Therefore, much of the requested information regarding acceptance criteria, specific performance studies, sample sizes, expert involvement, and ground truth establishment is not available in this document.

    The core argument for substantial equivalence is that the SE/RAM-1 is a combination of existing, cleared technologies with only minor software and tubing modifications, and that these modifications do not raise new issues of safety and effectiveness.

    Here's an attempt to answer your questions based on the provided text, highlighting what is (and isn't) present:


    1. A table of acceptance criteria and the reported device performance

    This document does not present specific acceptance criteria in the format often seen for AI/novel device clearances (e.g., minimum sensitivity, specificity, accuracy thresholds). Instead, the "performance" of the SE/RAM-1 is primarily asserted to be "same" as the predicate devices based on the principle of substantial equivalence.

    Acceptance Criteria Category | Acceptance Criteria (not explicitly stated as such, but inferred from equivalence claim) | Reported Device Performance (asserted equivalent to predicate devices)
    Intended Use | Must match or be substantially equivalent to predicate devices | Same as SE-9000 (CBC analysis) and R-3000 (reticulocyte analysis)
    Sample Type | Must process the same sample types as predicate devices | Same as SE-9000 and R-3000 (EDTA anticoagulated whole blood)
    Physical Safety | Must maintain the same safety profile as predicate devices | Same as SE-9000 (K936023) and R-3000 (K912494)
    Performance (General) | Performance characteristics must be equivalent to predicate devices | Same as SE-9000 (K936023) and R-3000 (K912494)
    Principles of Operation | Must utilize the same fundamental technologies | RF/DC, sheath flow (SE-9000 portion); fluorescence intensity, forward scatter (RAM-1 portion)
    Physical Hardware | Must use substantially similar hardware components | Hydraulic and electronic systems; shared pneumatic unit, SE sampler/barcode reader
    Software Programs | Functionality should be consistent with predicate devices, potentially with enhancements | Full color display, 10,000-sample storage, 12 QC files (RAM-1 portion)
    Reagents | Must use the same reagents as predicate devices | Same as used on stand-alone SE-9000 and R-3000 analyzers

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    This information is not provided in the document. The submission relies on the prior clearance of the SE-9000 and R-3000, implying that their performance data supports the combined system. No new, specific test set data for the SE/RAM-1 is detailed here.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This information is not provided in the document. As new specific performance studies for the SE/RAM-1 are not detailed, there's no mention of experts establishing ground truth for a new test set.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This information is not provided in the document.

    5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    This document describes an in vitro diagnostic device for automated blood cell analysis, not an AI-assisted interpretation tool for human readers. Therefore, an MRMC comparative effectiveness study comparing human readers with and without AI assistance is not applicable and was not performed. The device is a screening tool, with "Medical technologists...responsible for final review of abnormal cells," implying that human expertise is still required for confirmation, but not in a "human-in-the-loop" AI augmentation sense for primary interpretation.

    6. Whether a standalone study (i.e., algorithm-only performance, without a human in the loop) was done

    The device itself is a "standalone" automated analyzer in the sense that it performs the analysis and generates parameters without direct human intervention in the measurement process. However, it is explicitly stated that it is a "screening device for identifying abnormal blood specimens" and that "Medical technologists are responsible for final review of abnormal cells." This means the final diagnostic decision is not purely algorithmic. The document does not describe specific standalone performance studies for this combined device; instead, it refers to the performance of the predicate devices (K936023 for SE-9000 and K912494 for R-3000).

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    This information is not provided in this document. Any ground truth used for the predicate devices (SE-9000 and R-3000) would have been established during their respective clearance processes, but details are not given here. For blood analyzers, ground truth often involves manual microscopy and/or other validated laboratory methods.

    8. The sample size for the training set

    This information is not provided in the document. As this is not an AI/machine learning device being trained, the concept of a "training set" as commonly understood in that context does not apply. The device's operational principles are based on established biophysical detection methods.

    9. How the ground truth for the training set was established

    This information is not provided and is not applicable as there is no mention of a training set for an AI/ML model.


    K Number
    K961054
    Device Name
    SYSMEX UF-100
    Manufacturer
    Date Cleared
    1996-10-28

    (224 days)

    Product Code
    Regulation Number
    864.5200
    Reference & Predicate Devices
    Predicate For
    Intended Use

    UF-Check™ is intended for use in the quality control of the Sysmex UF-100™ automated urine analyzer. The Sysmex UF-100 is an in vitro medical device for use in urinalysis in clinical laboratories to replace microscopic review of normal and abnormal specimens and to flag specimens containing certain abnormalities.

    Device Description

    UF-Check is a suspension of particles representing red blood cells, white blood cells, epithelial cells, casts, and bacteria in a liquid medium. Sysmex UF-Check is supplied in glass bottles containing 47 mL volumes. Three bottles, one of each level, are packaged in one box.

    AI/ML Overview

    Here's a breakdown of the provided text in relation to your request about acceptance criteria and a study proving device fitness.

    It's important to note that the provided text is a 510(k) Summary for a medical device control solution, not a diagnostic device that analyzes patient data. Therefore, many of your requested items, particularly those related to "ground truth," "expert consensus," "training sets," and "human-in-the-loop," don't directly apply in the context of this control device. Controls are used to verify the performance of an analyzer, not to diagnose patients.

    However, I will extract what information is present and explain why other parts are not applicable.


    Acceptance Criteria and Device Performance

    The provided document describes a control solution (UF-Check™) for an automated urine analyzer (Sysmex UF-100™). The "performance" of this control solution would be its ability to consistently produce expected values when run on the analyzer and to remain stable over time.

    Acceptance Criteria (Implied) | Reported Device Performance
    Consistency within a single run (Within Run Precision) | "Results were consistent and gave acceptable performance." (No specific numerical range is provided in this summary, but this implies the variability within a single run met pre-defined acceptable limits.)
    Consistency across different manufacturing lots (Within Lot Precision) | "Results were consistent and gave acceptable performance." (Implies that the variation between batches of the control solution was within acceptable limits.)
    Stability over its intended shelf life (Long Term Stability) | "Results were consistent and gave acceptable performance." "Study results show UF-Check to be consistently reproducible and stable for the entire product dating." (Confirms the control maintains its integrity and performance characteristics throughout its stated shelf life.)
    Ability to challenge key analyzer functions | "UF-Check performs like a five-part differential control which when run in the QC mode gives values for the measurement of parameters. When run in the QC mode, all systems are checked for performance such as correct addition of dye, correct particle sizing, and correct enumeration of elements." (Implies the control effectively tests the critical functions of the Sysmex UF-100™ analyzer.)
    Safety and Effectiveness | "UF-Check is a safe and effective urinalysis control when used as instructed in the product package insert." (General conclusion of the studies.)
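
    Within-run precision of a control material is conventionally summarized as a mean, standard deviation, and coefficient of variation (CV%). The sketch below shows that calculation with hypothetical replicate values and an assumed CV limit; the UF-Check studies' actual data and acceptance limits are not given in the summary.

```python
# Minimal sketch of how within-run precision of a control material is usually
# summarized (mean, SD, CV%); the replicate values and acceptance limit are
# hypothetical, not from the UF-Check studies.
import statistics

replicates = [48.1, 47.6, 48.4, 47.9, 48.2, 47.7, 48.0, 48.3]  # e.g., WBC/uL, level 2 control

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)       # sample standard deviation
cv = 100.0 * sd / mean

print(f"mean = {mean:.2f}, SD = {sd:.3f}, CV = {cv:.2f}%")
print("acceptable" if cv <= 5.0 else "investigate")  # assumed 5% CV limit
```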

    Study Details (Based on the Provided Text)

    1. Sample Size used for the test set and the data provenance:

      • Sample Size: Not explicitly stated in the summary. The text mentions "Studies were performed" (plural), but does not quantify the number of runs, samples, or lots included in these studies for Within Run Precision, Within Lot Precision, and Long Term Stability.
      • Data Provenance: Not explicitly stated. Given it's a 510(k) summary filed by Sysmex Corporation in Long Grove, IL, it's highly probable the studies were conducted internally or through collaborators within the US or by Sysmex's global R&D. The data is retrospective in the sense that the studies were completed before the 510(k) submission.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Not Applicable in this context. For a control solution, "ground truth" is typically established by the manufacturer through rigorous analytical characterization methods using reference instruments and gravimetric/volumetric precision measurements during the manufacturing process. The "experts" would be the scientists and engineers involved in developing and characterizing the control, but not in the sense of clinical experts interpreting diagnostic results.
    3. Adjudication method for the test set:

      • Not Applicable. Adjudication is usually for subjective interpretations of diagnostic results. For a control solution, performance is measured against pre-defined analytical specifications and expected ranges.
    4. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

      • Not Applicable. This is a control solution, not an AI-powered diagnostic device. There are no "human readers" interpreting the control's results in a diagnostic fashion, nor is there AI performing diagnoses. The human interaction is usually limited to loading the control onto the analyzer and reviewing the quantitative results generated by the analyzer for quality control purposes.
    5. Whether a standalone study (i.e., algorithm-only performance, without a human in the loop) was done:

      • Not Applicable. Again, this is a control solution, not an algorithm or diagnostic device. The "standalone" performance for the control would be its intrinsic characteristics and stability, which are what the precision and stability studies assessed.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

      • Analytical Characterization / Manufacturing Specifications. As explained in point 2, the "truth" for a control solution is its meticulously defined composition and the analytical values ("assayed control") it is designed to produce within specific ranges on the target analyzer. This is established through the manufacturer's internal quality assurance and scientific methods during the development and production of the control.
    7. The sample size for the training set:

      • Not Applicable. Control solutions do not have "training sets" in the machine learning sense. Their "performance" is inherent to their chemical and physical composition and manufacturing process, validated through analytical studies.
    8. How the ground truth for the training set was established:

      • Not Applicable. No training set.