510(k) Data Aggregation

    K Number: K983151
    Date Cleared: 1998-11-05 (57 days)
    Product Code: (not listed)
    Regulation Number: 862.3320
    Reference & Predicate Devices: (not listed)
    Predicate For: N/A
    Intended Use

    The IMMAGE® Immunochemistry System Digoxin (DIG) Reagent, when used in conjunction with Beckman Coulter's IMMAGE® Immunochemistry Systems and IMMAGE® Immunochemistry Systems Drug Calibrator 2, is intended for the quantitative determination of digoxin in human serum or plasma by turbidimetric immunoassay.

    The IMMAGE® Immunochemistry Systems Drug Calibrator 2, used in conjunction with IMMAGE® Digoxin reagent, is intended for use on Beckman Coulter's IMMAGE® Immunochemistry Systems for the calibration of digoxin test systems.

    Device Description

    The IMMAGE® Immunochemistry System Digoxin (DIG) Reagent is designed for optimal performance on the IMMAGE® Immunochemistry Systems. It is intended for the quantitative determination of digoxin in serum and plasma.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the Beckman Coulter IMMAGE® Immunochemistry System Digoxin (DIG) Reagent, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document does not explicitly state "acceptance criteria" as a set of predefined thresholds the device had to meet for clearance. Instead, it presents performance data that demonstrate substantial equivalence to a predicate device. For the purpose of this request, I have inferred the "acceptance criteria" from typical requirements for such devices, focusing on method-comparison agreement and precision, and listed the reported performance alongside them.

    | Performance Metric | Inferred Acceptance Criteria (Typical for Such Devices) | Reported Device Performance (IMMAGE® DIG Reagent) |
    |---|---|---|
    | Method Comparison (Correlation with Predicate) | | |
    | Slope | Close to 1.0 (e.g., 0.95 - 1.05) | 1.051 |
    | Intercept (ng/mL) | Close to 0.0 (e.g., ± 0.15) | 0.13 ng/mL |
    | Correlation Coefficient (r) | ≥ 0.97 (indicating strong linear correlation) | 0.993 |
    | Imprecision (CV%) | | |
    | Within-Run Imprecision (Level 1) | ≤ 10% | 7.2 %CV |
    | Within-Run Imprecision (Level 2) | ≤ 5% | 2.6 %CV |
    | Within-Run Imprecision (Level 3) | ≤ 5% | 2.7 %CV |
    | Total Imprecision (Level 1) | ≤ 10% | 7.4 %CV |
    | Total Imprecision (Level 2) | ≤ 5% | 2.8 %CV |
    | Total Imprecision (Level 3) | ≤ 5% | 3.0 %CV |
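    To make the method-comparison figures concrete, the sketch below shows how a slope, intercept, and correlation coefficient are typically derived from paired results against a predicate method. It is illustrative only: the submission does not state which regression model was used (ordinary least squares is assumed here), and the simulated values are placeholders, not the study data.

```python
# Illustrative sketch only: ordinary least-squares comparison of a candidate
# assay against a predicate. OLS is an assumption; the 510(k) summary reports
# slope, intercept, and r without naming the regression model used.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

# Hypothetical paired digoxin results (ng/mL): predicate (x) vs. IMMAGE (y).
# 113 pairs mirrors the reported method-comparison sample size.
predicate = rng.uniform(0.5, 4.0, size=113)
immage = 1.05 * predicate + 0.13 + rng.normal(0.0, 0.1, size=113)

fit = linregress(predicate, immage)
print(f"slope     = {fit.slope:.3f}")      # reported value: 1.051
print(f"intercept = {fit.intercept:.2f}")  # reported value: 0.13 ng/mL
print(f"r         = {fit.rvalue:.3f}")     # reported value: 0.993
```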

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size:
      • Method Comparison: 113 samples (presumably human serum)
      • Imprecision: 80 measurements per level (3 levels tested, so 240 measurements in total); a simplified %CV sketch follows this list.
    • Data Provenance: Not explicitly stated (e.g., country of origin). However, given it's a 510(k) submission to the US FDA, it likely involves data from studies conducted in the US or regions adhering to similar regulatory standards. The samples are referred to as "serum," implying human biological samples.
    • Retrospective or Prospective: Not explicitly stated, but method comparison and imprecision studies are typically conducted prospectively with available samples.
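    The imprecision figures above are reported as %CV from repeated measurements of control material. As a rough illustration of how within-run and total %CV can be estimated from such a design, the sketch below simulates one control level. The 20-day × 2-run × 2-replicate layout (which yields 80 results per level), the control concentration, and the noise magnitudes are assumptions for illustration; the actual study would likely use a nested variance-components analysis (e.g., CLSI EP5-style) rather than this shortcut.

```python
# Illustrative sketch only: simplified within-run and total %CV estimates for
# one hypothetical control level. A 20-day x 2-run x 2-replicate design is
# assumed; a real analysis would use nested variance components.
import numpy as np

rng = np.random.default_rng(1)

days, runs_per_day, reps_per_run = 20, 2, 2
target = 1.2  # hypothetical digoxin control concentration, ng/mL

# Simulate run-to-run shifts plus within-run (replicate) noise.
run_means = target + rng.normal(0.0, 0.02, size=(days, runs_per_day, 1))
results = run_means + rng.normal(0.0, 0.08, size=(days, runs_per_day, reps_per_run))

grand_mean = results.mean()

# Within-run variance: pooled sample variance of replicates within each run.
within_run_var = results.var(axis=2, ddof=1).mean()
within_run_cv = 100 * np.sqrt(within_run_var) / grand_mean

# Total %CV approximated here by the SD of all 80 results.
total_cv = 100 * results.std(ddof=1) / grand_mean

print(f"within-run %CV = {within_run_cv:.1f}")
print(f"total %CV      = {total_cv:.1f}")
```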

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    This type of in-vitro diagnostic (IVD) device does not typically rely on "experts" to establish a ground truth in the way medical imaging or clinical diagnostic devices might. Instead, the "ground truth" for method comparison is the measurement obtained from the predicate device, and for imprecision, it's the statistical analysis of repeated measurements of known control samples.

    Therefore, this section is not applicable in the traditional sense for this type of device and study. The predicate device (Abbott TDx Digoxin II) serves as the "reference method" for comparison.

    4. Adjudication Method for the Test Set

    This is not applicable for this type of IVD device and study. Adjudication methods (like 2+1, 3+1) are typically used for establishing ground truth in subjective diagnostic tasks where multiple human readers interpret data (e.g., radiology, pathology). Here, results are quantitative measurements.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement with vs. without AI Assistance

    This is not applicable. This device is an in-vitro diagnostic (IVD) immunoassay system that provides quantitative measurements of digoxin in serum/plasma. It is not an AI-powered diagnostic tool for human readers, nor does it have a human-in-the-loop component for interpretation that would warrant an MRMC study.

    6. If a Standalone Study (i.e., Algorithm Only, without Human-in-the-Loop Performance) Was Done

    The performance data presented ("Method Comparison Study Results" and "Estimated IMMAGE System Digoxin (DIG) Reagent Imprecision") represents the standalone performance of the IMMAGE® Immunochemistry System Digoxin (DIG) Reagent itself. It measures the device's ability to quantitatively determine digoxin levels without any human interpretive intervention.

    7. The Type of Ground Truth Used

    • For Method Comparison: The "ground truth" was established by the predicate device, the Abbott TDx Digoxin II. The IMMAGE® DIG Reagent's measurements were compared against the measurements obtained from this established, legally marketed device.
    • For Imprecision: The "ground truth" was derived from control samples with known or well-characterized digoxin concentrations. The imprecision study assesses the device's reproducibility and precision when repeatedly measuring these controls.

    8. The Sample Size for the Training Set

    The document does not provide any information about a "training set" sample size. For an immunoassay reagent, the development process (which might involve optimization or "training" of assay parameters) is typically based on laboratory experiments and analytical validation rather than a distinct "training set" of patient samples in the way machine learning algorithms are trained. The data presented here is for analytical performance validation.

    9. How the Ground Truth for the Training Set was Established

    As no "training set" is described for this type of device, this point is not applicable. The ground truth for the analytical validation (test set) was established by comparison with a predicate device and characterized control samples, as described in point 7.
