
510(k) Data Aggregation

    K Number: K031270
    Date Cleared: 2003-05-06 (15 days)
    Product Code:
    Regulation Number: 866.6010
    Reference & Predicate Devices:
    Device Name: ACCESS CEA REAGENTS FOR USE ON THE ACCESS IMMUNOASSAY SYSTEMS

    Intended Use

    The Access CEA assay is a paramagnetic particle, chemiluminescent immunoassay for the quantitative determination of Carcinoembryonic Antigen (CEA) levels in human serum, using the Access Immunoassay Systems. CEA measured by the Access Immunoassay Systems is used as an aid in the management of cancer patients in whom changing CEA concentrations have been observed.

    Device Description

    The Access® CEA reagents consist of reagent packs, calibrators, bi-level controls, substrate and wash buffer.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for a modification to the Access® CEA Reagents on the Access® Immunoassay Systems. The modification involves adding a new instrument platform, the Beckman Coulter UniCel™ Dxl 800 Access® Immunoassay System, to the existing family of Access Immunoassay Systems.

    The study aimed to demonstrate that the Access CEA assay on the Dxl system is substantially equivalent to the Access CEA assay on the Access 2 system.

    Here's a breakdown of the requested information based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    The document states that the Access CEA assay met established acceptance criteria for "method comparison, precision and analytical sensitivity." However, the specific numerical acceptance criteria for these parameters (e.g., specific ranges for agreement, coefficients of variation, or limits of detection) are not detailed in the provided summary. Similarly, the reported performance values from the studies (e.g., actual method comparison results, precision data, or analytical sensitivity figures) are not provided. The text only offers a general statement of compliance.

    Acceptance Criteria (Specifics Not Provided)         | Reported Device Performance (Specifics Not Provided)
    Method Comparison (e.g., % agreement, bias limits)   | Met established criteria
    Precision (e.g., %CV, within-run, between-run)       | Met established criteria
    Analytical Sensitivity (e.g., Limit of Detection)    | Met established criteria
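
    The summary gives no numbers, but precision and analytical sensitivity criteria of this kind are conventionally expressed as a coefficient of variation (%CV) and a limit of detection. The sketch below shows how such metrics are typically computed; the replicate values, the blank-based convention (mean blank + 1.645 SD), and the function names are illustrative assumptions, not data from this 510(k).

```python
import statistics

def percent_cv(replicates):
    """Within-run precision expressed as a coefficient of variation (%)."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

def limit_of_detection(blank_replicates, k=1.645):
    """Analytical sensitivity estimated as mean blank + k * SD of blank replicates
    (one common convention; the actual protocol is not given in the summary)."""
    return statistics.mean(blank_replicates) + k * statistics.stdev(blank_replicates)

if __name__ == "__main__":
    # Hypothetical CEA control replicates (ng/mL) from a single run.
    control_run = [4.9, 5.1, 5.0, 5.2, 4.8, 5.0]
    # Hypothetical blank (zero-calibrator) replicates (ng/mL).
    blanks = [0.02, 0.03, 0.01, 0.02, 0.02, 0.03]

    print(f"Within-run precision: {percent_cv(control_run):.1f} %CV")
    print(f"Estimated limit of detection: {limit_of_detection(blanks):.3f} ng/mL")
```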

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    The document does not specify the sample sizes used for the method comparison, precision, or analytical sensitivity studies. It also does not mention the data provenance (e.g., country of origin, retrospective or prospective nature of the samples).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    This device is an immunoassay for quantitative determination of Carcinoembryonic Antigen (CEA) levels, which relies on a chemical reaction to produce a numerical result. Therefore, there is no ground truth established by human experts in the same way it would be for an imaging device requiring expert interpretation. The "ground truth" for evaluating this device would be established by reference methods or established analytical standards. The document does not provide details on how this was established for the comparison.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    Since this is an immunoassay seeking substantial equivalence to a predicate device, the "adjudication method" in the context of human expert review is not applicable. The evaluation relies on quantitative analytical comparisons, not human interpretation consensus.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it

    No, an MRMC comparative effectiveness study was not performed. This type of study is relevant for medical imaging or diagnostic interpretation tasks where human readers are involved. This device is an automated immunoassay system, and its evaluation focuses on analytical performance metrics rather than human reader improvement.

    6. If standalone performance (i.e., algorithm only, without human-in-the-loop) was evaluated

    Yes, the studies described (method comparison, precision, and analytical sensitivity) inherently represent standalone performance of the algorithm/device. The device itself performs the quantitative determination of CEA levels. Human involvement would be in operating the instrument and interpreting the numerical output, but the performance being evaluated is that of the automated system. The document states that the new system (Dxl) uses the same reagents and calibrators as the predicate (Access 2), implying that the algorithm/assay itself is unchanged, and the evaluation is on the new instrument platform's ability to produce equivalent results.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The "ground truth" for these types of studies would typically be established by:

    • Predicate device results: For method comparison, the results from the legally marketed Access® CEA Reagents on the Access® Immunoassay Analyzer (K981985, K991707) would serve as the comparator or "reference."
    • Reference materials/standards: For precision and analytical sensitivity, the device would be tested against known concentrations of CEA or characterized control materials.

    The document does not explicitly state the type of ground truth used beyond indicating that method comparison was against the predicate device.
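
    For the method-comparison arm, the predicate results serve as the comparator, and agreement is usually summarized by a regression slope and intercept plus an average bias. The sketch below illustrates that idea with ordinary least squares on hypothetical paired values; actual submissions more often use Deming or Passing-Bablok regression, and none of these numbers come from the document.

```python
import statistics

def least_squares_fit(x, y):
    """Ordinary least-squares slope and intercept for paired measurements."""
    mean_x, mean_y = statistics.mean(x), statistics.mean(y)
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

if __name__ == "__main__":
    # Hypothetical paired patient-sample CEA results (ng/mL).
    predicate = [2.1, 5.4, 10.2, 25.0, 60.3, 120.5]   # Access 2 (predicate) results
    candidate = [2.0, 5.6, 10.5, 24.1, 61.0, 118.9]   # Dxl 800 (candidate) results

    slope, intercept = least_squares_fit(predicate, candidate)
    mean_bias = statistics.mean(c - p for c, p in zip(candidate, predicate))

    print(f"Slope: {slope:.3f}  Intercept: {intercept:.3f} ng/mL  Mean bias: {mean_bias:.3f} ng/mL")
```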

    8. The sample size for the training set

    The document does not mention a training set. This is because the device is an immunoassay kit/system, not an artificial intelligence or machine learning algorithm that requires a distinct training phase. The "training" of such a system would involve its initial development and validation by the manufacturer, but not in the context of a dataset used to optimize an AI model.

    9. How the ground truth for the training set was established

    As there is no "training set" in the context of an AI/ML algorithm for this immunoassay device, this question is not applicable.
