
510(k) Data Aggregation

    K Number
    K240479


    Date Cleared
    2024-05-10 (80 days)

    Product Code
    Regulation Number
    866.6010
    Reference & Predicate Devices
    Predicate For
    N/A
    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review
    Intended Use

    The Access OV Monitor assay is a paramagnetic particle, chemiluminescent immunoassay for the quantitative determination of CA 125 antigen levels in human serum and plasma using the Access Immunoassay Systems. This device is indicated for use in the measurement of CA 125 antigen to aid in the management of ovarian cancer patients. Serial testing for patient CA 125 antigen concentrations should be used in conjunction with other clinical methods used for monitoring ovarian cancer.

    Device Description

    The Access OV Monitor assay is a sandwich immunoenzymatic assay. The Access OV Monitor assay consists of the reagent pack and calibrators. Other items needed to run the assay include substrate and wash buffer. The Access OV Monitor assay reagent pack, Access OV Monitor assay calibrators, along with the UniCel Dxl Wash Buffer II are designed for use with the Dxl 9000 Access Immunoassay Analyzer in a clinical laboratory setting.

    AI/ML Overview

    The document provided is an FDA 510(k) clearance letter and a 510(k) summary for the Beckman Coulter Access OV Monitor assay. This device is an in vitro diagnostic (IVD) immunoassay for measuring CA 125 antigen levels to aid in the management of ovarian cancer patients.

    The information requested pertains to the performance study design for AI/ML-based diagnostic devices, which is typically quite different from the validation of an immunoassay. Specifically, sections like "Number of experts used to establish ground truth", "Adjudication method", "Multi-Reader Multi-Case (MRMC) comparative effectiveness study", and "Effect size of how much human readers improve with AI vs without AI assistance" are relevant to AI/ML device validation studies, not typically to immunoassay validation.

    An immunoassay like the Access OV Monitor is validated by demonstrating its analytical performance characteristics (e.g., precision, linearity, limits of detection) and method comparison against a predicate device, rather than by human reader studies or expert consensus on images.

    Therefore, many of the requested fields are not applicable to this type of device and the information provided in the document. However, I will extract the relevant information where it exists and explicitly state when a requested criterion is not applicable.


    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) summary for the Access OV Monitor assay:

    Device: Access OV Monitor Immunoassay

    The study primarily focuses on demonstrating the substantial equivalence of the new Access OV Monitor assay run on the Dxl 9000 Access Immunoassay Analyzer to its predicate device (Access OV Monitor assay on the Access Immunoassay System, K023597). This is achieved by comparing their analytical performance characteristics.

    1. Table of Acceptance Criteria and Reported Device Performance

    Note: The document does not explicitly state "acceptance criteria" in a tabulated format for each performance metric; rather, it lists the study results and implies they met predefined criteria (e.g., "met the acceptance criteria of R² ≥ 0.90 and slope 1.00 ± 0.09"). The "Implied Acceptance Criteria" column below is inferred from these statements and typical IVD validation expectations.

    | Performance Metric | Implied Acceptance Criteria (Inferred from Text) | Reported Device Performance (Access OV Monitor on Dxl 9000) |
    | --- | --- | --- |
    | Method Comparison: R² (Concordance) | R² ≥ 0.90 | 1.00 |
    | Method Comparison: Slope | 1.00 ± 0.09 | 0.98 (95% CI: 0.97 - 0.99) |
    | Method Comparison: Intercept | No explicit numeric criterion; evaluated via CI | -0.14 (95% CI: -0.38 - 0.13) |
    | Imprecision (Within-Laboratory/Total %CV) | Depends on concentration level; generally, lower %CV desired | 2.6% to 6.1% for concentrations > 15 U/mL; SD of 0.2 for concentrations ≤ 15 U/mL (see full table in source document) |
    | Linearity | Linear across the analytical measuring interval | Linear throughout the analytical measuring interval of approximately 2.0 - 5,000 U/mL |
    | Limit of Blank (LoB) | Specific value to be determined and met; typically as low as possible | 0.5 U/mL |
    | Limit of Detection (LoD) | Specific value to be determined and met | 0.7 U/mL |
    | Limit of Quantitation (LoQ) | Specific value to be determined and met | 2.0 U/mL |
    | Measuring Range | Consistent with predicate and intended use | 2.0 - 5,000 U/mL (predicate: 0.5 - 5,000 U/mL) |
    | Sample Volume | Not an acceptance criterion; a characteristic change | 30 µL (predicate: 25 µL) |
    | Substrate | Not an acceptance criterion; a characteristic change | Lumi-Phos PRO substrate (predicate: Access Substrate) |
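The method-comparison acceptance criteria above (R² ≥ 0.90, slope within 1.00 ± 0.09) can be sketched as a simple check on paired candidate-versus-predicate results. Note this is an illustrative simplification: 510(k) method-comparison studies typically use Passing-Bablok or weighted Deming regression rather than the ordinary least squares shown here, and the sample values below are invented, not data from the submission.

```python
# Illustrative check of inferred method-comparison acceptance criteria:
# R^2 >= 0.90 and slope within 1.00 +/- 0.09. Ordinary least squares is
# used only as a sketch; real studies use Passing-Bablok or Deming fits.

def method_comparison(predicate, candidate):
    n = len(predicate)
    mx = sum(predicate) / n
    my = sum(candidate) / n
    sxx = sum((x - mx) ** 2 for x in predicate)
    syy = sum((y - my) ** 2 for y in candidate)
    sxy = sum((x - mx) * (y - my) for x, y in zip(predicate, candidate))
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy ** 2 / (sxx * syy)  # squared Pearson correlation
    return {
        "slope": slope,
        "intercept": intercept,
        "r2": r2,
        "meets_criteria": r2 >= 0.90 and abs(slope - 1.00) <= 0.09,
    }

# Invented paired CA 125 results (U/mL) spanning the measuring interval.
pred = [15.0, 80.0, 240.0, 600.0, 1500.0, 3500.0]
cand = [14.6, 78.5, 235.0, 590.0, 1470.0, 3440.0]
result = method_comparison(pred, cand)
print(result["meets_criteria"])
```

A real study would also report confidence intervals for the slope and intercept, as the summary above does.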

    2. Sample Size Used for the Test Set and Data Provenance

    • Method Comparison Study: 152 samples.
    • Imprecision Study: 120 replicates per sample level (e.g., Sample 1, Sample 2, etc.; see the N column in the table). Multiple samples were tested in triplicate, in 2 runs per day, for a minimum of 20 days.
    • Data Provenance: The document does not specify the country of origin of the data or whether samples were retrospective or prospective. It is typical for immunoassay validation studies to use a mix of clinical samples (retrospective) and spiked samples.
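The imprecision summary reported in the study (%CV at concentrations above 15 U/mL, SD at or below 15 U/mL) reduces to a small computation over a level's replicates. The sketch below uses invented replicate values; the actual study used 120 replicates per level.

```python
# Sketch of the imprecision summary convention from the 510(k) summary:
# report %CV when the level's mean exceeds 15 U/mL, otherwise report SD.
# Replicate values are invented for illustration.
from statistics import mean, stdev

def imprecision_summary(replicates):
    m = mean(replicates)
    sd = stdev(replicates)
    if m > 15.0:
        return {"mean": m, "metric": "%CV", "value": 100.0 * sd / m}
    return {"mean": m, "metric": "SD", "value": sd}

# A handful of invented replicates for one concentration level (U/mL).
level = [102.1, 99.8, 101.5, 98.9, 100.7, 101.2]
out = imprecision_summary(level)
print(out["metric"], round(out["value"], 2))
```

The switch from %CV to SD at low concentrations is common in IVD precision reporting, because %CV inflates as the mean approaches zero.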

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts

    • Not Applicable. This is an immunoassay, not an AI/ML diagnostic imaging device. The "ground truth" for the performance characteristics of an immunoassay is its analytical measurements, often compared against a reference method or validated predicate, not expert consensus on images.

    4. Adjudication Method for the Test Set

    • Not Applicable. See point 3.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study and Effect Size of Human Reader Improvement with vs. without AI Assistance

    • Not Applicable. This is an immunoassay, not an AI/ML diagnostic imaging device intended to assist human readers.

    6. Standalone (Algorithm-Only, Without Human-in-the-Loop) Performance

    • Not Applicable. This is an immunoassay; its "performance" is the analytical output of the instrument-reagent system, which is inherently "standalone" in generating the quantitative result. There is no separate "algorithm" performance in the sense of an AI model.

    7. The Type of Ground Truth Used

    For an immunoassay, "ground truth" refers to the true concentration of the analyte, which is established through:

    • Reference Methods: Highly accurate and precise methods not explicitly detailed but implied by standard validation practices.
    • Comparative Measurements against a Predicate Device: The current study uses the predicate device (Access OV Monitor on the Access Immunoassay System) as its primary comparator to establish substantial equivalence.
    • Known Concentrations: For linearity and limits studies, samples are often prepared at known concentrations (e.g., by diluting a high-concentration sample).
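The "known concentrations" approach to ground truth can be sketched as a dilution-recovery check: a high-concentration sample is serially diluted, and each measured result is compared against the expected (dilution-derived) value. The ±10% recovery criterion and all numeric values below are assumptions for illustration; the document does not state the actual linearity criterion used.

```python
# Hypothetical dilution-recovery check for linearity ground truth.
# A stock sample of known concentration is serially diluted; measured
# results should recover the expected value within an assumed +/-10%.

def dilution_recovery(stock_conc, dilution_factors, measured):
    rows = []
    for f, obs in zip(dilution_factors, measured):
        expected = stock_conc / f
        recovery = 100.0 * obs / expected  # percent recovery
        rows.append((expected, obs, recovery, 90.0 <= recovery <= 110.0))
    return rows

# Invented stock near the top of the 2.0 - 5,000 U/mL measuring interval.
rows = dilution_recovery(4800.0, [2, 10, 100, 1000],
                         [2410.0, 475.0, 47.2, 4.9])
print(all(ok for *_, ok in rows))
```

In practice, linearity is evaluated per CLSI guidance with regression across the dilution series rather than a simple per-point recovery window, but the underlying idea of comparing measured against dilution-derived expected values is the same.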

    8. The Sample Size for the Training Set

    • Not Applicable. This is an immunoassay, not an AI/ML device that requires a "training set" in the machine learning sense. The assay is "trained" or developed through biological and chemical methods, and its performance is characterized through analytical validation.

    9. How the Ground Truth for the Training Set was Established

    • Not Applicable. See point 8.