Search Results

Found 411 results

510(k) Data Aggregation

    K Number
    K243168
    Date Cleared
    2025-06-20

    (263 days)

    Product Code
    Regulation Number
    866.3510
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    Alinity i Rubella IgG

    Intended Use

    The Alinity i Rubella IgG assay is a chemiluminescent microparticle immunoassay (CMIA) used for the quantitative determination of IgG antibodies to rubella virus in human serum, serum separator, and plasma tubes (lithium heparin, lithium heparin separator, and tripotassium EDTA) on the Alinity i system.

    The Alinity i Rubella IgG assay is to be used as an aid in the determination of immune status to rubella in individuals including women of child-bearing age.

    The Alinity i Rubella IgG assay has not been cleared for use in screening blood, plasma, or tissue donors.

    The performance of this device has not been established for cord blood or neonatal samples. Likewise, performance has not been established for populations of immunocompromised or immunosuppressed individuals.

    Device Description

    The Alinity i Rubella IgG assay is an automated, two-step immunoassay for the quantitative determination of anti-rubella IgG in human serum and plasma using chemiluminescent microparticle immunoassay (CMIA) technology.

    Sample, partially purified rubella virus-coated paramagnetic microparticles, and assay diluent are combined and incubated. The anti-rubella IgG present in the sample binds to the rubella virus-coated microparticles. The mixture is washed. Anti-human IgG acridinium-labeled conjugate is then added to create a reaction mixture, which is incubated. Following a wash cycle, Pre-Trigger and Trigger Solutions are added.

    The resulting chemiluminescent reaction is measured as a relative light unit (RLU). There is a direct relationship between the amount of anti-rubella IgG in the sample and the RLU detected by the system optics.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and the study proving the device meets those criteria, based on the provided FDA 510(k) clearance letter for the Alinity i Rubella IgG assay.

    Overview of the Device and its Purpose:

    The Alinity i Rubella IgG assay is a chemiluminescent microparticle immunoassay (CMIA) used for the quantitative determination of IgG antibodies to the rubella virus. It's intended to aid in determining the immune status to rubella, particularly in women of child-bearing age. It is a diagnostic device, not an AI/ML-driven one, so some of the requested points regarding AI/ML studies (like MRMC studies, training set details, expert ground truth establishment for AI) are not applicable.


    1. Table of Acceptance Criteria and Reported Device Performance

    Since this is a diagnostic assay and not an AI/ML device, the acceptance criteria are related to the analytical and clinical performance of the immunoassay itself rather than metrics like AUC, sensitivity/specificity for object detection, or F1 scores inherent to AI. The key performance indicators are Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA) compared to a composite comparator method.

    Acceptance Criteria (Implied by Performance Targets in Context of 510(k) Equivalence):

    For a 510(k) substantial equivalence determination, the new device must demonstrate performance that is as safe and effective as a legally marketed predicate device. While explicit numerical acceptance criteria for PPA and NPA are not stated in the summary, typical expectations for diagnostic assays like this are high agreement rates (e.g., >90% or 95%) with the comparator method, especially in categories such as "Reactive" and "Nonreactive." The confidence intervals should also demonstrate a reasonable level of certainty around these agreement rates. The acceptance of the listed performance values below implies that these meet the FDA's criteria for substantial equivalence to the predicate.

    Performance Category | Acceptance Criteria (Implied) | Reported Device Performance (Alinity i Rubella IgG)
    PPA (Overall, Medical Decision Point ≥ 10 IU/mL) | High agreement (e.g., >90%) with comparator for positive samples. | Routine Order (US): 95.36% (95% CI: 93.74, 96.57)
    Routine Order (OUS): 97.67% (95% CI: 95.64, 98.77)
    Pregnant Females (US): 95.24% (95% CI: 92.60, 96.97)
    NPA (Overall, Medical Decision Point < 10 IU/mL) | High agreement (e.g., >90%) with comparator for negative/equivocal samples. | Routine Order (US): 97.62% (95% CI: 91.73, 99.34)
    Routine Order (OUS): 95.71% (95% CI: 88.14, 98.53)
    Pregnant Females (US): 96.49% (95% CI: 88.08, 99.03)
    CDC Panel Agreement - PPA | High PPA against CDC reference panel. | 93.9% (95% CI: 86.51, 97.37)
    CDC Panel Agreement - NPA | High NPA against CDC reference panel. | 100.0% (95% CI: 82.41, 100.00)
    Precision (Within-Laboratory) - Max %CV for controls & panels (approx.) | Acceptable variability for quantitative measurements (e.g., …)
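    The agreement statistics above are simple proportions with two-sided 95% confidence intervals. As a minimal sketch of how PPA, NPA, and a score (Wilson) interval can be computed from 2x2 counts, the example below uses invented counts; the 510(k) summary does not state which interval method was used, and these numbers are not the study data.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Two-sided Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def agreement(tp: int, fp: int, fn: int, tn: int) -> dict:
    """PPA = agreement on comparator-positive samples, NPA = agreement on comparator-negative samples."""
    return {
        "PPA": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "NPA": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
    }

# Invented counts for illustration only (not the Alinity i Rubella IgG results).
print(agreement(tp=190, fp=5, fn=10, tn=95))
```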

    K Number
    K242022
    Device Name
    Access Toxo IgG
    Date Cleared
    2025-03-28

    (260 days)

    Product Code
    Regulation Number
    866.3780
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    Access Toxo IgG

    Intended Use

    The Access Toxo IgG assay is a paramagnetic-particle, chemiluminescent immunoassay for the qualitative and quantitative determination of IgG antibodies to Toxoplasma gondii in human serum using the Access Immunoassay Systems. The Access Toxo IgG assay aids in the diagnosis of Toxoplasma gondii infection and may be used to assess the immune status of pregnant women.

    This product is not FDA cleared/approved for the screening of blood or plasma donors. Assay performance characteristics have not been established for immunocompromised or immunosuppressed patients, cord blood, neonatal specimens or infants.

    Device Description

    The Access Toxo IgG assay is a paramagnetic-particle, chemiluminescent immunoassay for the qualitative and quantitative detection of Toxoplasma gondii-specific IgG antibody in adult human serum using the Access Immunoassay Systems.

    The Access Toxo IgG assay consists of the reagent pack, calibrators, and quality controls (QCs), packaged separately. Other items needed to run the assay include substrate and wash buffer.

    AI/ML Overview

    This document describes the premarket notification (510(k)) for the Beckman Coulter Access Toxo IgG assay, a chemiluminescent immunoassay for detecting IgG antibodies to Toxoplasma gondii in human serum. This product is intended to aid in the diagnosis of Toxoplasma gondii infection and assess the immune status of pregnant women.

    The submission claims substantial equivalence to a legally marketed predicate device, the Access Toxo IgG assay (K080869). The primary difference highlighted is the instrument used: the new device runs on the DxI 9000 Access Immunoassay Analyzer, while the predicate runs on the Access 2 Immunoassay System.

    Here's an analysis of the provided information, focusing on the study that proves the device meets the acceptance criteria:

    1. Table of Acceptance Criteria and Reported Device Performance

    Strictly speaking, the document does not present "acceptance criteria" in a separate table with yes/no compliance. Instead, it details specific performance metrics and their measured values. The implicit acceptance criterion for most analytical performance studies (like imprecision and method comparison) is that the new device's performance is acceptable for its intended use and comparable to or better than the predicate. For Linearity, LoB, LoD, and LoQ, the acceptance criterion is that the study supports the claimed values.

    Performance Characteristic | Acceptance Criteria (Implicit from Study Design/Claims) | Reported Device Performance (Access Toxo IgG on DxI 9000)
    Method Comparison (vs. Access 2 Immunoassay System) | High Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA) to demonstrate interchangeability between instruments. | PPA: 100.00% (40/40) with 95% CI = 91.24% to 100% (for Reactive samples)
    NPA: 100.00% (99/99) with 95% CI = 96.26% to 100.00% (for Non-Reactive samples)
    Imprecision (Within-Laboratory) | SD … 3.2 IU/mL (these are the design criteria mentioned, implying they are the acceptance threshold). | Sample 1 (2.7 IU/mL): Overall Precision SD 0.38 (13.9% CV); meets SD criterion (0.38 …)
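    LoB, LoD, and LoQ claims like those referenced above are commonly derived from replicate measurements of blank and low-level samples, for example with the classical parametric approach described in CLSI EP17 (LoB = mean of blanks + 1.645 x SD of blanks; LoD = LoB + 1.645 x SD of low-level samples). The sketch below applies that calculation to made-up replicates; it is not the procedure or the data used in this submission.

```python
import statistics

def limit_of_blank(blank_results: list[float]) -> float:
    # Parametric estimate of the 95th percentile of the blank distribution.
    return statistics.mean(blank_results) + 1.645 * statistics.stdev(blank_results)

def limit_of_detection(lob: float, low_level_results: list[float]) -> float:
    # Lowest level reliably distinguishable from blank, allowing a 5% false-negative rate.
    return lob + 1.645 * statistics.stdev(low_level_results)

# Hypothetical replicate results in assay units (illustration only).
blanks = [0.01, 0.02, 0.00, 0.03, 0.01, 0.02, 0.02, 0.01]
lows = [0.10, 0.12, 0.09, 0.11, 0.13, 0.10, 0.08, 0.12]

lob = limit_of_blank(blanks)
print(f"LoB = {lob:.3f}, LoD = {limit_of_detection(lob, lows):.3f}")
```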

    K Number
    K243575
    Manufacturer
    Date Cleared
    2025-02-12

    (85 days)

    Product Code
    Regulation Number
    866.3305
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    ARCHITECT HSV-2 IgG, ARCHITECT HSV-2 IgG Calibrator, ARCHITECT HSV-2 IgG Controls

    Intended Use

    The ARCHITECT HSV-2 IgG assay is a chemiluminescent microparticle immunoassay (CMIA) used for the qualitative detection of specific IgG antibodies to herpes simplex virus type 2 (HSV-2) in human serum (collected in serum and serum separator tubes) and plasma (collected in dipotassium EDTA, lithium heparin plasma separator tubes) on the ARCHITECT i System.

    The ARCHITECT HSV-2 IgG assay is to be used for testing sexually active adults or expectant mothers to aid in the presumptive diagnosis of HSV-2 infection. The test results may not determine the state of active lesion-associated disease manifestations, particularly for primary infection. The predictive value of a reactive or nonreactive result depends on the prevalence of HSV-2 infection in the population and the pre-test likelihood of HSV-2 infection.

    NOTE: The performance of the ARCHITECT HSV-2 IgG assay has not been established for use in the pediatric population, for neonatal screening, or for testing immunocompromised or immunosuppressed patients. The assay has not been FDA cleared or approved for screening blood or plasma donors.

    Device Description

    The ARCHITECT HSV-2 IgG assay is an automated, two-step immunoassay for the qualitative detection of IgG antibodies to HSV-2 in human serum and plasma using chemiluminescent microparticle immunoassay (CMIA) technology.

    The kit contains different components: Reagent (microparticles, conjugate and assay diluent), Calibrator, and external Controls (reactive and nonreactive).

    AI/ML Overview

    The document describes the ARCHITECT HSV-2 IgG assay, a diagnostic device for detecting specific IgG antibodies to herpes simplex virus type 2 (HSV-2). The study aims to demonstrate that this new device is substantially equivalent to legally marketed predicate devices.

    While the document does not explicitly state "acceptance criteria" in the format of a separate table setting thresholds beforehand, the performance summary sections detail the studies and the observed performance. The key performance metrics demonstrated are:

    • Clinical Performance (Positive Percent Agreement - PPA and Negative Percent Agreement - NPA): This is the primary measure of the device's accuracy in correctly identifying positive and negative samples for HSV-2 IgG antibodies.
    • Precision (Within-Laboratory and Reproducibility): These studies evaluate the consistency and reliability of the assay results across different runs, days, and sites.
    • Analytical Specificity (Interference and Cross-reactivity): These studies assess the device's ability to accurately measure HSV-2 IgG without being affected by other substances or related conditions.
    • Specimen Collection Types: This confirms the assay's performance across various accepted sample types.
    • Carry-Over: Verifies that prior samples do not affect subsequent sample results.

    Here's an interpretation of the implied acceptance criteria and reported performance based on the provided document:

    Acceptance Criteria and Reported Device Performance

    Performance Metric | Implicit Acceptance Criteria (Inferred from study design/general diagnostic device standards) | Reported Device Performance
    Clinical Performance | High Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA) compared to a composite comparator, indicating strong diagnostic accuracy. | Sexually Active Population:
    • PPA: 96.54% (223/231) with 95% CI: 93.32% to 98.24%
    • NPA: 96.90% (375/387) with 95% CI: 94.66% to 98.22%

    Pregnant Population:

    • PPA: 95.12% (78/82) with 95% CI: 88.12% to 98.09%
    • NPA: 98.60% (212/215) with 95% CI: 95.98% to 99.52%

    CDC Panel Agreement:

    • PPA (Reactive samples): 100% (30/30)
    • NPA (Nonreactive samples): 97.14% (68/70) |
      | Precision (Within-Laboratory) | Low %CV for different panels and controls, demonstrating consistent results within the laboratory. | 20-Day Within-Laboratory Precision:
    • Positive Control: Mean S/CO 3.01, Within-Laboratory %CV 3.9
    • Serum Panel 2: Mean S/CO 1.60, Within-Laboratory %CV 6.8
    • Serum Panel 3: Mean S/CO 2.47, Within-Laboratory %CV 11.6
    • Plasma Panels: %CVs ranging from 3.3 to 5.7

    12-Day Within-Laboratory Precision (Higher Analyte Levels):

    • Serum Panel 4: Mean S/CO 7.14, Within-Laboratory %CV 5.2
    • Serum Panel 5: Mean S/CO 14.73, Within-Laboratory %CV 4.6
    • Plasma Panel 4: Mean S/CO 7.85, Within-Laboratory %CV 4.5
    • Plasma Panel 5: Mean S/CO 14.90, Within-Laboratory %CV 5.0 |
      | Precision (Reproducibility) | Low %CV across multiple sites/instruments, demonstrating consistent results regardless of testing location. | Reproducibility (3 testing sites):
    • Positive Control: Mean S/CO 2.98, Reproducibility %CV 5.2
    • Serum Panel 2: Mean S/CO 1.56, Reproducibility %CV 4.0
    • Serum Panel 3: Mean S/CO 2.52, Reproducibility %CV 4.3
    • Plasma Panels: %CVs ranging from 5.2 to 5.4 |
      | Analytical Specificity (Interference) | Minimal impact (…
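    The precision entries above are coefficients of variation (%CV) of replicate signal-to-cutoff (S/CO) results. A full CLSI EP05-style analysis partitions variance into within-run, between-run, and between-day components; the simplified sketch below only computes an overall mean, SD, and %CV from hypothetical replicates to show what the metric measures.

```python
import statistics

def percent_cv(values: list[float]) -> float:
    """%CV = 100 * sample SD / mean for a set of replicate results."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate S/CO results for one precision panel member (not study data).
panel = [2.41, 2.55, 2.38, 2.47, 2.52, 2.44, 2.60, 2.49]
print(f"mean = {statistics.mean(panel):.2f}, SD = {statistics.stdev(panel):.3f}, %CV = {percent_cv(panel):.1f}")
```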

    K Number
    K243374
    Date Cleared
    2025-01-28

    (90 days)

    Product Code
    Regulation Number
    864.7695
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    HemosIL CL HIT-IgG(PF4-H)

    Intended Use

    HemosIL CL HIT-IgG(PF4-H) is a qualitative, fully automated, chemiluminescent immunoassay (CIA) for the detection of IgG antibodies that react with Platelet Factor 4 (PF4) when complexed to heparin. The assay is for use in human 3.2% citrated plasma on the ACL TOP 970 CL in a laboratory setting.

    The result provided by the assay should be interpreted as either positive or negative based on the assay cut-off (1.00 U/mL). The positive or negative result aids in determining the risk for heparin induced thrombocytopenia (HIT) when used in conjunction with other laboratory and clinical findings.

    Anti-PF4/Heparin antibodies are commonly found in patients with HIT. For use in adult population suspected of HIT. Not for use in isolation to exclude HIT.

    For prescription use only.

    Device Description

    HemosIL CL HIT-IgG(PF4-H) assay is a chemiluminescent two-step immunoassay consisting of magnetic particles coated with PF4 complexed to polyvinyl sulfonate (PVS) which capture, if present, PF4/H antibodies from the sample. After incubation, magnetic separation, and a wash step, a tracer consisting of an isoluminol-labeled anti-human IgG antibody is added and may bind with the captured PF4/H IgG on the particles. After a second incubation, magnetic separation, and a wash step, reagents that trigger the luminescent reaction are added, and the emitted light is measured as relative light units (RLUs) by the ACL TOP 970 CL optical system. The RLUs are directly proportional to the PF4/H IgG concentration in the sample.

    The HemosIL CL HIT-IgG(PF4-H) assay utilizes a 4 Parameter Logistic Curve fit (4PLC) data reduction method to generate a Master Curve. The Master Curve is predefined and lot-dependent, and it is stored in the instrument through the cartridge barcode. With the measurement of calibrators, the predefined Master Curve is transformed into a new, instrument-specific 4PLC Working Curve. The concentration values of the calibrators are included in the reagent kit calibrator value sheet 2D barcode.
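    To make the 4PLC data reduction concrete, the sketch below shows the standard four-parameter logistic response function and how a concentration can be back-calculated from a measured RLU once a working curve is available. The parameter values are invented for illustration; they do not come from the assay's master curve or software.

```python
def four_pl(x: float, a: float, b: float, c: float, d: float) -> float:
    """Standard 4PL response: a = response at zero dose, d = response at infinite dose,
    c = inflection point, b = slope factor."""
    return d + (a - d) / (1 + (x / c) ** b)

def inverse_four_pl(y: float, a: float, b: float, c: float, d: float) -> float:
    """Back-calculate dose (e.g., U/mL) from a measured response (e.g., RLU)."""
    return c * ((a - d) / (y - d) - 1) ** (1 / b)

# Invented working-curve parameters on an RLU scale (illustration only).
a, b, c, d = 500.0, 1.2, 4.0, 2.0e6

measured_rlu = 1.2e6
print(f"Estimated concentration: {inverse_four_pl(measured_rlu, a, b, c, d):.2f} U/mL")
```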

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the HemosIL CL HIT-IgG(PF4-H) device, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    Performance Characteristic | Acceptance Criteria (Implicit) | Reported Device Performance (HemosIL CL HIT-IgG(PF4-H))
    Precision | As demonstrated by predicate | Repeatability (%CV): Controls 3.0-5.2%, Samples 2.6-8.2%
    Within-Laboratory (%CV): Controls 5.9-7.4%, Samples 5.7-10.7%
    Lot-to-Lot Variability | As demonstrated by predicate | Controls 2.0-2.1%, Samples 4.5-14.5%
    Reproducibility | As demonstrated by predicate | Total Reproducibility (%CV): Controls 5.5-7.6%, Samples 6.2-16.4% (for measurable samples)
    Analytical Sensitivity | As demonstrated by predicate | LoB: 0.09 U/mL, LoD: 0.14 U/mL
    Analytical Specificity | No interference at specified concentrations | No interference for: Hemoglobin (1000 mg/dL), Bilirubin (unconjugated & conjugated 40 mg/dL), Triglycerides (1500 mg/dL), Unfractionated heparin (1.2 IU/mL), LMWH (2.5 IU/mL), HAMA (1 µg/mL), Rheumatoid Factor (160 IU/mL), Acid citric dextrose (0.45 g/dL), Argatroban (1.2 µg/mL), Fondaparinux (0.102 mg/dL), Dabigatran (0.900 mg/dL), Rivaroxaban (0.270 mg/dL), Protamine (5 mg/dL)
    Method Comparison (vs. Predicate) | High agreement (e.g., >95%) | PPA: 97% (91/94), NPA: 100% (246/247), Total Agreement: 99% (337/341)
    Cut-Off Validation (vs. SRA) | High agreement (e.g., >95%) | 98.9% Agreement, 97.8% Negative Percent Agreement, 100.0% Positive Percent Agreement
    Normal Reference Range | Established values | Heparin Exposed, Non-HIT Suspected Patients: Upper Limit 1.42 U/mL (n=132); Healthy Donors: Upper Limit 0.45 U/mL (n=122)
    Intended Use | Consistent with predicate | Maintained qualitative detection of IgG antibodies to PF4-heparin complexes in 3.2% citrated plasma for adult HIT suspicion.
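    The normal reference range row above reports upper limits from 132 heparin-exposed, non-HIT-suspected patients and 122 healthy donors. The submission does not state the statistical method; a common choice (for example, per CLSI EP28) is a nonparametric upper percentile of the reference distribution, sketched below on made-up values.

```python
import math

def upper_reference_limit(values: list[float], percentile: float = 97.5) -> float:
    """Nonparametric upper reference limit: a rank-based estimate of the requested
    percentile of the observed reference distribution."""
    ordered = sorted(values)
    rank = math.ceil((len(ordered) + 1) * percentile / 100.0)
    rank = min(max(rank, 1), len(ordered))  # clip to the available sample
    return ordered[rank - 1]

# Hypothetical anti-PF4/heparin IgG results (U/mL) from presumed-negative donors.
donors = [0.12, 0.08, 0.21, 0.33, 0.15, 0.27, 0.40, 0.10, 0.19, 0.24,
          0.31, 0.17, 0.22, 0.09, 0.36, 0.14, 0.28, 0.11, 0.26, 0.38]
print(f"Upper reference limit: {upper_reference_limit(donors):.2f} U/mL")
```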

    2. Sample Size Used for the Test Set and Data Provenance

    • Precision Study: 5 plasma samples and 2 levels of controls. Tested over 20 days.
    • Reproducibility Study: 6 plasma samples. Tested across 3 external sites, twice per day over 5 days with 3 replicates.
    • Analytical Sensitivity (LoD): Not explicitly stated, but assessed using "three different lots" of reagent cartridges.
    • Analytical Specificity: Not explicitly stated, but involved testing with various interfering substances and 24 citrated plasma samples from APS patients.
    • Normal Reference Range Study: 132 Heparin-Exposed, Non-HIT Suspected Patients and 122 Healthy Donors.
    • Cut-Off Validation Study (vs. SRA): 91 citrated plasma samples (45 SRA positive, 46 SRA negative).
    • Method Comparison Study (vs. Predicate): 341 samples from HIT-suspected patients.

    Data Provenance: The document does not explicitly state the country of origin for the patient data. It is implied to be retrospective as the samples were "from HIT-suspected patients" or "patients diagnosed with Antiphospholipid Syndrome (APS)", suggesting they were pre-collected. The reproducibility study explicitly states it was done at "3 external" sites.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not specify the number or qualifications of experts used to establish the ground truth for the test set.

    • For the Cut-Off Validation Study, Serotonin Release Assay (SRA) results were used as the reference standard, indicating a highly specialized laboratory assay.
    • For the Method Comparison Study, the predicate device (HemosIL AcuStar HIT-IgG(PF4-H)) served as the reference standard.
    • For the Normal Reference Range Study, patient classification as "Heparin Exposed, Non-HIT Suspected Patients" or "Healthy Donors" implies a clinical determination, but no expert involvement is specifically detailed.

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method (e.g., 2+1, 3+1) for the test set or for establishing ground truth. The SRA and predicate device results appear to be taken as the reference.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. This device is an in-vitro diagnostic (IVD) immunoassay, not an imaging or software device that would typically involve human readers. The study focuses on the analytical and clinical performance of the assay itself.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was Done

    Yes, the studies described are standalone performance evaluations of the HemosIL CL HIT-IgG(PF4-H) assay. The device is a "fully automated, chemiluminescent immunoassay (CIA)" and the performance data reflects its direct measurement capabilities on an ACL TOP 970 CL instrument without explicit human-in-the-loop interpretation beyond standard laboratory procedures for running the assay and reporting results. The device provides a qualitative positive or negative result based on a cut-off.

    7. The Type of Ground Truth Used

    The types of ground truth used include:

    • Serotonin Release Assay (SRA): For the cut-off validation study, which is considered a gold standard for HIT diagnosis.
    • Predicate Device Results (HemosIL AcuStar HIT-IgG(PF4-H)): For the method comparison study, establishing equivalence to a previously cleared device.
    • Clinical Diagnosis/Patient Classification: For the normal reference range study (e.g., "Heparin Exposed, Non-HIT Suspected Patients" and "Healthy Donors") and sample collection for methodology studies (e.g., "HIT-suspected patients", "patients diagnosed with Antiphospholipid Syndrome (APS)").

    8. The Sample Size for the Training Set

    The document does not explicitly describe a separate "training set" for the device. As an IVD immunoassay, the development process typically involves internal optimization and validation studies, but these are not usually structured as a distinct "training set" in the same way as machine learning algorithms. The mentioned studies are primarily for performance validation and substantial equivalence claims. A "Master Curve" is generated for the assay, which is "predefined and lot dependent" and stored in the instrument, indicating calibration and internal standardization but not a "training set" in the common sense for AI/ML.

    9. How the Ground Truth for the Training Set Was Established

    Since an explicit "training set" in the context of AI/ML is not described, the method for establishing its ground truth is not applicable. The "Master Curve" concept implies calibration and validation using known standards and controls, which are part of the assay's design and manufacturing process.


    K Number
    K223093
    Date Cleared
    2024-12-17

    (809 days)

    Product Code
    Regulation Number
    866.5660
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    Aptiva APS IgG Reagent; Aptiva APS IgM Reagent

    Intended Use

    The Aptiva APS IgG Reagent is an immunoassay utilizing particle-based multi-analyte technology for the semiquantitative determination of anti-cardiolipin (aCL) and anti-beta 2 glycoprotein 1 (aβ2GPI) IgG autoantibodies in human serum as an aid in the diagnosis of primary antiphospholipid syndrome (APS), when used in conjunction with other laboratory findings.

    The Aptiva APS IgG Reagent is intended for use with the Aptiva System.

    The Aptiva APS IgM Reagent is an immunoassay utilizing particle-based multi-analyte technology for the semiquantitative determination of anti-cardiolipin (aCL) and anti-beta 2 glycoprotein 1 (aβ2GPI) IgM autoantibodies in human serum as an aid in the diagnosis of primary and secondary antiphospholipid syndrome (APS), when used in conjunction with other laboratory findings.

    The Aptiva APS IgM Reagent is intended for use with the Aptiva System.

    Device Description

    The Aptiva APS IgG and Aptiva APS IgM reagents utilize particle-based multi-analyte technology (PMAT) in a cartridge format. Each analyte (anti-cardiolipin [aCL] and anti-β2-glycoprotein I [aβ2GPI]) in the Aptiva APS IgG and Aptiva APS IgM reagents is a solid phase immunoassay utilizing fluorescent microparticles. This technology allows each of the two analytes, along with a human IgG or human IgM capture antibody (IgG or IgM Control Microparticle), to be coated onto three uniquely recognizable paramagnetic microparticles, which are combined into one tube.

    The Aptiva instrument is a fully automated, random-access analyzer. This platform is a closed system with continuous load and random-access capabilities that processes the samples, runs the reagent and reports results. It includes liquid handling hardware, optical module (OM), and integrated computer with proprietary software and touch screen user interface.

    The two analyte microparticles, along with the control microparticle, are stored in the reagent cartridge under conditions that maintain the proteins in their reactive states. When the assay cartridge is ready to be used for the first time, the reagent tube seals are pierced using the cartridge lid. The reagent cartridge is then loaded onto the Aptiva instrument, where the microparticles are automatically rehydrated using a buffer located within the cartridge.

    The Aptiva System dilutes the sample 1:8, then combines an aliquot of diluted sample and reagent into a cuvette. The mixture is incubated at 37°C. After a wash cycle, conjugated anti-human IgG or IgM antibodies are added to the particles and this mixture is incubated at 37°C. Excess conjugate is removed in another wash cycle, and the particles are re-suspended in system fluid.

    Multiple images are generated by the system to identify and count the two (2) unique analyte particles, as well as determine the amount of conjugate on each particle. The control microparticle, coated with goat anti-human IgG or IgM antibodies, is present to flag low concentrations of IgG or IgM in the sample as an assay verification step. The median fluorescent intensity (MFI) for each analyte is proportional to the concentration of conjugate bound to human IgG or IgM, which is proportional to the concentration of IgG or IgM antibodies bound to the corresponding particle population. The system uses the MFI from at least 50 particles of each population. The identity of the particles is determined by the unique signature of the particles.
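    As an illustration of how per-analyte readouts could be aggregated in a particle-based multi-analyte design like the one described, the sketch below groups classified particle events by population, enforces the at-least-50-particles rule mentioned above, and reports the median fluorescence per population. The data structure and error handling are assumptions for illustration, not the Aptiva software.

```python
from collections import defaultdict
from statistics import median

MIN_PARTICLES = 50  # the description states at least 50 particles per population are used

def median_fluorescence(events: list[tuple[str, float]]) -> dict[str, float]:
    """Group (population_id, fluorescence) events and return the median fluorescent
    intensity (MFI) per particle population."""
    by_population = defaultdict(list)
    for population, intensity in events:
        by_population[population].append(intensity)
    mfi = {}
    for population, intensities in by_population.items():
        if len(intensities) < MIN_PARTICLES:
            raise ValueError(f"only {len(intensities)} particles classified for {population}")
        mfi[population] = median(intensities)
    return mfi

# Hypothetical classified events: 60 particles each for the two analytes and the control.
events = ([("aCL", 1200.0 + i) for i in range(60)]
          + [("aB2GPI", 800.0 + i) for i in range(60)]
          + [("IgG_control", 5000.0 + i) for i in range(60)])
print(median_fluorescence(events))
```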

    Each analyte in the Aptiva APS IgG Reagent and the Aptiva APS IgM Reagent is assigned a predefined lot specific master curve. The analyte specific master curve is stored on the reagent cartridge RFID label. Based on results obtained by running calibrators (supplied separately), the system creates individual working curves. Working curves are used by the software to calculate Fluorescent Light Units (FLU) for each analyte from the MFI values obtained for each sample.

    Aptiva APS IgG and Aptiva APS IgM Calibrators and Aptiva APS IgG and Aptiva APS IgM Controls are sold separately.

    AI/ML Overview

    The provided text describes the analytical and clinical performance characteristics of the Aptiva APS IgG and Aptiva APS IgM Reagents, which are immunoassays for the semi-quantitative determination of anti-cardiolipin (aCL) and anti-beta 2 glycoprotein 1 (aβ2GPI) IgG/IgM autoantibodies. This information is presented in the context of a 510(k) premarket notification for FDA clearance.

    Here's a breakdown of the requested information based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state "acceptance criteria" as a separate, quantified set of thresholds for each performance metric. Instead, it presents the results of various analytical and clinical studies, implying that these results met the internal criteria for substantial equivalence to predicate devices and overall performance claims for an in vitro diagnostic (IVD) device.

    However, we can infer performance targets based on the presented data and the overall context of an FDA submission for an IVD. The primary performance metrics presented are related to precision, detection limits, linearity, interference, and clinical sensitivity/specificity.

    Inferred Acceptance Criteria and Reported Device Performance (Summary)

    Performance Characteristic | Inferred Acceptance Criteria (General IVD Expectations) | Reported Device Performance (Aptiva APS Reagents)
    Precision | CV% to be within acceptable ranges for IVD assays, typically lower for higher concentrations and clinically critical ranges. | Within-Laboratory (Total Precision) CV%:
    • aCL IgG: 5.6% - 9.5% (generally decreasing with higher FLU)
    • aβ2GPI IgG: 6.9% - 11.7% (generally decreasing with higher FLU)
    • aCL IgM: 4.3% - 9.7% (generally decreasing with higher FLU)
    • aβ2GPI IgM: 5.5% - 10.2% (generally decreasing with higher FLU)

    Between-Site Reproducibility CV% (3 sites):

    • aCL IgG: 5.2% - 9.3%
    • aβ2GPI IgG: 6.4% - 10.0%
    • aCL IgM: 5.4% - 10.0%
    • aβ2GPI IgM: 5.9% - 10.5%

    Between-Lot Reproducibility CV% (3 lots):

    • aCL IgG: 6.6% - 13.3%
    • aβ2GPI IgG: 8.5% - 12.1%
    • aCL IgM: 6.1% - 11.4%
    • aβ2GPI IgM: 6.0% - 10.5% |
      | Limit of Blank (LoB) | Very low, close to zero, ensuring no signal from blank samples. | aCL IgG: 0.00 FLU
      aβ2GPI IgG: 0.02 FLU
      aCL IgM: 0.01 FLU
      aβ2GPI IgM: 0.03 FLU |
      | Limit of Detection (LoD)| Low, indicating ability to detect small amounts of analyte. | aCL IgG: 0.07 FLU
      aβ2GPI IgG: 0.09 FLU
      aCL IgM: 0.04 FLU
      aβ2GPI IgM: 0.06 FLU |
      | Limit of Quantitation (LoQ)| Low, defining the lowest concentration that can be reliably quantified. | aCL IgG: 0.29 FLU
      aβ2GPI IgG: 0.21 FLU
      aCL IgM: 0.06 FLU (set to 0.10 FLU for AMR lower limit)
      aβ2GPI IgM: 0.09 FLU (set to 0.10 FLU for AMR lower limit) |
      | Analytical Measuring Range (AMR)| Wide enough to cover relevant clinical concentrations, with demonstrated linearity. | aCL IgG: 0.29 - 328.94 FLU
      aβ2GPI IgG: 0.21 - 256.70 FLU
      aCL IgM: 0.10 – 114.68 FLU
      aβ2GPI IgM: 0.10 – 95.86 FLU

    Linearity demonstrated across these ranges with R2 values mostly ≥ 0.98. |
    | High Concentration Hook Effect| No hook effect within or above the AMR. | Confirmed no hook effect up to theoretically calculated values: aCL IgG: 2645.36 FLU, aβ2GPI IgG: 1790.48 FLU, aCL IgM: 167.25 FLU, aβ2GPI IgM: 126.13 FLU. |
    | Interference | No significant interference from common endogenous or exogenous substances at specified concentrations. | No interference detected for aCL IgG, aβ2GPI IgG, aCL IgM, and aβ2GPI IgM with tested interferents (bilirubin, hemoglobin, triglycerides, cholesterol, RF IgM, human IgG, ibuprofen, warfarin, prednisone, acetaminophen, aspirin, hydroxychloroquine, omeprazole, simvastatin, heparin) at their respective tested concentrations. Percent recoveries or FLU differences were within acceptable ranges (generally close to 100% recovery for spiked samples, or low FLU difference for negative samples). |
    | Sample Stability | Samples should be stable for specific storage conditions and freeze/thaw cycles. | Samples stable up to 48 hours at room temperature, up to 14 days at 2-8°C, and for up to 5 freeze/thaw cycles. |
    | Reagent Stability | Reagent shelf-life and in-use stability should be established. | Shelf-life: 9 months for Aptiva APS IgG Reagent, 7 months for Aptiva APS IgM Reagent (based on accelerated stability, verified by ongoing real-time studies).
    In-use (onboard) stability: 28 days for both, with 14-day recalibration. |
    | Clinical Sensitivity & Specificity| High sensitivity to detect disease (APS) and high specificity to correctly identify non-disease states (controls/non-APS). | Aptiva APS IgG:

    • aCL IgG: Sensitivity 54.1% (95% CI: 45.3–62.7%), Specificity 99.5% (95% CI: 98.2–99.9%)
    • aβ2GPI IgG: Sensitivity 53.3% (95% CI: 44.5-61.9%), Specificity 99.0% (95% CI: 97.5-99.6%)

    Aptiva APS IgM:

    • aCL IgM: Sensitivity 27.5% (95% CI: 22.7–32.9%), Specificity 97.5% (95% CI: 95.4–98.6%)
    • aβ2GPI IgM: Sensitivity 24.7% (95% CI: 20.1–30.0%), Specificity 98.5% (95% CI: 96.8–99.3%) |
      | Predicate Method Comparison (Percent Agreement)| High agreement with legally marketed predicate devices. | Aptiva APS IgG (aCL IgG) vs. QUANTA Flash aCL IgG: PPA: 81.6%, NPA: 95.7%, TPA: 93.1% (N=202)
      Aptiva APS IgG (aβ2GPI IgG) vs. QUANTA Lite Beta 2GP1 IgG ELISA: PPA: 88.0%, NPA: 89.7%, TPA: 88.9% (N=108)
      Aptiva APS IgM (aCL IgM) vs. QUANTA Flash aCL IgM: PPA: 87.0%, NPA: 90.2%, TPA: 89.8% (N=422)
      Aptiva APS IgM (aβ2GPI IgM) vs. QUANTA Flash β2GPI IgM: PPA: 88.9%, NPA: 84.3%, TPA: 84.8% (N=244) |

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Clinical Performance Test Set Sample Sizes:

      • Aptiva APS IgG (aCL IgG & aβ2GPI IgG): N=526 (122 APS combined, 404 controls/non-APS)
      • Aptiva APS IgM (aCL IgM & aβ2GPI IgM): N=689 (291 APS combined, 398 controls/non-APS)
      • Normal Population for Expected Values: N=200 apparently healthy blood donors.
    • Method Comparison Test Set Sample Sizes:

      • Aptiva APS IgG (aCL IgG vs. QUANTA Flash aCL IgG): N=202
      • Aptiva APS IgG (aβ2GPI IgG vs. QUANTA Lite Beta 2GP1 IgG ELISA): N=108
      • Aptiva APS IgM (aCL IgM vs. QUANTA Flash aCL IgM): N=422
      • Aptiva APS IgM (aβ2GPI IgM vs. QUANTA Flash β2GPI IgM): N=244
    • Analytical Performance Test Set Sample Sizes:

      • Precision: 7 samples for IgG, 7 samples for IgM (80 replicates each for within-lab; 75 replicates each for between-site/lot reproducibility from multiple sites/lots).
      • LoB/LoD/LoQ: Blanks (LoB: 4 samples, 60 data points/lot); Low-level samples (LoD/LoQ: 4 samples, 120 data points/assay/lot).
      • Interference: 6 human specimens (negative, cutoff, positive) for each analyte, spiked with various interferents and tested in 5 replicates.
      • Sample Stability: 5 serum samples (IgG), 5 serum samples (IgM) tested in duplicates over time/cycles.
      • In-use Stability: 11 samples (IgG), 7 samples (IgM) tested periodically.
    • Data Provenance: The document states that a "cohort of characterized samples, none of which were used for establishing the reference range, was used to validate the clinical performance." It does not explicitly state the country of origin of the data or whether the studies were retrospective or prospective. However, for a 510(k) submission, clinical validation studies typically involve retrospective or prospectively collected clinical samples, but the exact nature (e.g., specific clinical sites, patient populations beyond disease groups) and geographic origin are not detailed here. The studies were likely conducted within a controlled laboratory setting by the manufacturer (Inova Diagnostics, Inc. in San Diego, CA) or its affiliates, using sourced human serum samples.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    The establishment of "ground truth" for IVD devices like these typically relies on well-characterized clinical samples and established diagnostic criteria for the disease (Antiphospholipid Syndrome - APS).

    • The document states that the clinical performance validation was performed using "a cohort of characterized samples." The characterization of these samples (i.e., whether they definitively represent APS or control) would serve as the ground truth.
    • However, the document does not specify the number of experts or their qualifications (e.g., rheumatologists, clinical immunologists/pathologists) who established the diagnostic status (ground truth) of the clinical samples (APS vs. control) used in the clinical sensitivity and specificity studies. It is implied that these were "characterized samples," meaning their disease status was determined by established clinical and laboratory criteria, likely involving clinical consensus or previous diagnoses.
    • For cut-off establishment, the reference population included "apparently healthy subjects," and the "internal APS samples (data not provided)" and "distribution of result values of healthy controls" were used. This suggests clinical characterization of these samples.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The concept of "adjudication method" (like 2+1 or 3+1) is typically relevant for interpretative tasks, such as reading medical images, where multiple human readers interpret the same data and their interpretations need to be reconciled to establish a ground truth.

    For these types of IVD assays, ground truth for clinical performance is established based on the clinical diagnosis of the patient from whom the sample was collected. This diagnosis is usually a culmination of clinical findings, established criteria (e.g., the revised Sapporo criteria for APS), and other laboratory tests, rather than an "adjudication" of multiple independent interpretations of the test results themselves.

    Therefore, the document does not mention any adjudication method in the context of establishing ground truth for the test samples, as it's not applicable in the same way it would be for an AI-medical imaging device. The "ground truth" for the samples (APS vs. non-APS) is assumed to be pre-established clinical diagnosis.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No MRMC study was conducted or is applicable here.

    This device is an in vitro diagnostic (IVD) immunoassay, not an AI-powered image analysis or diagnostic aid that assists human readers (e.g., radiologists interpreting images). The device directly measures biomarker levels in a sample, and its output is a quantitative value (FLU) which then determines a semi-quantitative result (Positive/Negative/Indeterminate based on cut-offs). Human "readers" (laboratory personnel) operate the instrument and interpret the final quantitative results based on predefined cut-offs, but they are not subjectively interpreting complex data that AI would assist with, in the sense of an MRMC study.

    Therefore, an MRMC comparative effectiveness study, and an effect size related to human reader improvement with AI assistance, are not relevant for this type of device and are not mentioned in the document.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    This is a "standalone" device in terms of its core functionality, but the term "algorithm only" or "human-in-the-loop" isn't directly analogous.

    • The Aptiva System is a fully automated, random-access analyzer (page 6). This means the instrument itself, with its integrated software and optical module, processes the samples, runs the reagents, and reports results independently after the sample is loaded and the assay initiated.
    • The "performance" described here (sensitivity, specificity, precision, linearity, etc.) is the device's performance (including its internal algorithms and mechanics) in generating quantitative results. There isn't a separate "algorithm only" performance that needs to be differentiated from a "human-in-the-loop" performance, because the device is the automated system determining the FLU values. The human interaction is primarily in sample loading, reagent handling, and result review/reporting, not in interpreting raw data that the device itself would also interpret in an unassisted mode.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The ground truth for the clinical performance studies (sensitivity and specificity) was established based on "characterized samples" representing patients with Antiphospholipid Syndrome (APS) and various control groups (patients with other autoimmune/infectious diseases, apparently healthy subjects).

    While the document doesn't explicitly state "expert consensus," it's highly implied that the "characterization" of these samples as APS or control would be based on:

    • Clinical findings: Presenting symptoms, medical history.
    • Other laboratory findings: Beyond the novel antibodies, other relevant diagnostic tests.
    • Established diagnostic criteria: For APS, this would typically be the revised Sapporo classification criteria, which combine clinical and laboratory criteria.

    So, it's a combination of established clinical diagnoses and potentially other laboratory data, which implicitly would involve the consensus or findings of medical experts involved in patient diagnosis. It is not based on pathology (e.g., tissue biopsy) or outcomes data (e.g., long-term disease progression as the sole ground truth).

    8. The sample size for the training set

    The document describes the submission of a "new device" and its performance characteristics. It does not explicitly mention or quantify a "training set" in the context of machine learning.

    For IVD devices, a "training set" isn't a standard concept unless the device incorporates adaptive algorithms or AI that learns from data. In this case, the device is an immunoassay with predefined master curves and calibrated reagents. The master curves are generated "at Inova for each reagent lot, where in-house Master Curve Standards with assigned FLU values are run multiple times." These "in-house Master Curve Standards" could be considered analogous to a "training" or calibration process, but it's not a dataset for training a generalized AI model but rather for calibrating each reagent lot of a classical assay.

    The sample sizes provided in the document are for:

    • Analytical performance (precision, LoB/LoD/LoQ, linearity, interference, stability).
    • Clinical validation (sensitivity/specificity studies).
    • Method comparison studies.
    • Reference range establishment.

    None of these are explicitly labeled as a "training set."

    9. How the ground truth for the training set was established

    As there is no explicitly defined "training set" for a machine learning model, the concept of establishing ground truth for such a set is not applicable here.

    However, if we consider the "Master Curve Standards" as analogous to calibration/training data for the assay, their "ground truth" (assigned FLU values) would be established through a rigorous internal process by the manufacturer (Inova Diagnostics) based on:

    • Carefully prepared and characterized aliquots (standards) with known or assigned concentrations of the target antibodies.
    • Repeat measurements and statistical analysis for consistent and accurate assignment of FLU values.
    • This is a standard practice for calibrating quantitative IVD assays, ensuring the device outputs accurate and traceable results.

    K Number
    K233367
    Date Cleared
    2024-08-12

    (315 days)

    Product Code
    Regulation Number
    866.3830
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    iDart Lyme IgG ImmunoBlot Kit

    Intended Use

    The iDart™ Lyme IgG ImmunoBlot Kit is an immunoblot assay intended for the in vitro qualitative detection of IgG antibodies to Borrelia burgdorferi in human serum. The iDart Lyme IgG ImmunoBlot Kit is intended to detect antibodies to LSA and multiple other B. burgdorferi antigens following a modified two-tier test methodology. Positive results from the iDart Lyme IgG ImmunoBlot Kit are supportive evidence for the presence of antibodies and exposure to B. burgdorferi. Negative results do not preclude infection with B. burgdorferi. iDart™ Lyme IgG ImmunoBlot Kit is intended to aid in the diagnosis of Lyme disease and the test kit should only be used on samples from patients with clinical history, signs and symptoms consistent with Lyme disease. The iDart Lyme IgG Immunoblot Kit is not intended as a screen for asymptomatic patients.

    Test results are to be used in conjunction with information obtained from the patient's clinical evaluation and other diagnostic procedures.

    For in vitro diagnostic use only
    For professional use only
    For prescription use only

    Device Description

    The iDart™ Lyme IgG ImmunoBlot tests are line immunoblot assays. Antigenic proteins specific for Borrelia species that cause Lyme Disease are produced by recombinant DNA technology in Escherichia coli. The purified proteins are then applied as discrete lines on a nitrocellulose membrane along with two control proteins.

    The iDart™ Lyme IgG ImmunoBlot Kit contains IgG ImmunoBlot strips, and the proteins are applied in the following order: C1 (IgG/IgM - conjugate control), C2 (Protein L - calibrator/serum control), P93, P41, P39, P23, P31, P66, P58, P45, P34, P30, P28, P18 and LSA (a chimeric VlsE peptide termed the Lyme Screen Antigen).

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the iDart™ Lyme IgG ImmunoBlot Kit are primarily demonstrated through its analytical performance (reproducibility, analytical specificity, cross-reactivity, interference) and clinical performance (method comparison with STTT, clinical sensitivity/specificity using a CDC panel).

    Acceptance Criteria Category | Specific Metric / Evaluation | Acceptance Threshold (Implied/Explicit) | Reported Device Performance (as stated)
    Analytical Performance
    Reproducibility | Agreement across sites, operators, runs, days | 100% agreement expected | 100% agreement of all bands among all runs, all days and across 3 sites for negative, moderate negative, high negative, moderate positive, and high positive samples (Table 1).
    Analytical Specificity (Endemic) | Specificity in healthy individuals from endemic areas | High specificity | 99.36% (2 false positives out of 313 samples from CDC and Bay Area Lyme Foundation) (Table 2).
    Analytical Specificity (Non-Endemic) | Specificity in healthy individuals from non-endemic areas | High specificity | 100% (0 false positives out of 112 samples from CDC and CA) (Table 3).
    Cross-Reactivity | False positivity with various conditions | Low/no cross-reactivity | 100% specificity for LSA, 98.67% specificity for ≥2 bands, and 100% specificity for IgG Positive across 376 potentially cross-reactive samples from various disease states and infections (Table 4). Minor false positives (5 for ≥2 bands) were noted but resulted in 0% IgG positive or only a single band out of the two required for positivity.
    Interference | Effect of endogenous analytes | No interference | No interference observed for bilirubin, albumin, cholesterol, triglycerides, and hemoglobin at specified low and high concentrations on positive, low positive, and negative Borrelia IgG samples (Table 5).
    Clinical Performance
    Method Comparison (STTT) | Positive Percent Agreement (PPA) with STTT | High PPA and NPA | Bay Area Lyme Foundation (n=290): PPA: 95.00% (95% CI: 76.39% – 99.11%), NPA: 86.67% (95% CI: 82.09% – 90.21%) (Table 7)
    IGeneX Inc. Cohort 2 (n=248): PPA: 95.00% (95% CI: 89.52% – 97.69%), NPA: 90.63% (95% CI: 84.33% - 94.56%) (Table 8)
    IGeneX Inc. Cohort 3 (n=230): PPA: 90.91% (95% CI: 62.27% – 98.38%), NPA: 96.80% (95% CI: 93.55% – 98.44%) (Table 9)
    Clinical Sensitivity | Performance against CDC Reference Panel | High sensitivity for later stages | Stage I: 58.33% (higher than STTT at 30.00%)
    Stage II: 90.00% (equal to STTT)
    Stage III: 100% (equal to STTT)
    Overall: 71.11% (higher than STTT at 52.22%) (Table 10).
    Clinical Specificity | Performance against CDC Reference Panel | High specificity | Healthy controls: 100% (equal to STTT)
    Disease Controls: 100% (equal to STTT) (Table 10).
    Fresh and Frozen Sample Comparability | Consistent results between fresh and frozen samples | Consistent results | All IgG positive samples remained positive and all negative samples remained negative after freezing (Table 11).
    Antibody Class Specificity | Specificity of anti-human IgG conjugate | Specific to IgG | All positive samples tested without treatment or with human IgM remained positive, and all negative samples remained negative. When treated with human IgG, all positive samples became negative, confirming specificity (Table 12).

    2. Sample Size Used for the Test Set and Data Provenance

    • Reproducibility: 90 samples for each of the 6 sample types (High Positive, Moderate Positive, Negative-1, Negative-2, Negative-3, Low Positive). Total of 540 tests performed across 3 sites, 2 operators, 5 days, 2 runs/day. Data provenance is not explicitly stated beyond "blinded and coded samples."
    • Analytical Specificity (Healthy Individuals):
      • 313 samples from endemic areas (CDC, Bay Area Lyme Foundation - NY, MA, WI).
      • 112 samples from non-endemic areas (CDC, CA).
    • Cross-Reactivity Study: 376 samples from various disease states/infections (CDC, IGeneX (CA), New York Biological (NY), BEI, Kamineni Life Sciences Pvt. Ltd, Hyderabad (India), Warde Medical Laboratory (MI)).
    • Interference Study: One positive, one low positive, and one negative Borrelia IgG sample were used for each interference agent and concentration, tested in singlicate.
    • Clinical Studies (Method Comparison with STTT): A total of 768 serum samples.
      • Site 1: 290 clinical serum samples from Bay Area Lyme Foundation.
      • Site 2: 37 clinical serum samples (Cohort 2) + 230 clinical serum samples (Cohort 3) from IGeneX Inc.
      • Site 3: 211 clinical serum samples (Cohort 2) from IGeneX Inc.
      • Data provenance: "procured from two vendors" (Bay Area Lyme Foundation and IGeneX Inc.). Samples were "prospective banked samples" or "clinical serum samples."
    • Clinical Sensitivity/Specificity (CDC Serum Panel): 280 serum samples from CDC (patients with Lyme disease at different stages, look-alike conditions, healthy controls from endemic and non-endemic regions).
    • Fresh and Frozen Samples Comparison Study: 72 decoded left-over patient serum samples.
    • Antibody Class Specificity: 10 previously tested patient samples (6 negatives, 4 positives).

    All clinical samples were "blinded, re-coded."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document does not explicitly state that experts were used to establish ground truth for the test set derived from clinical samples. Instead, the ground truth for these clinical performance studies appears to be based on:

    • STTT (Standard Two-Tier Test Methodology): This is a laboratory-based diagnostic algorithm involving an EIA/IFA screen followed by an immunoblot. Its results serve as the comparator (ground truth) for the method comparison study.
    • CDC Reference Panel: For the clinical sensitivity/specificity, the CDC reference panel implicitly has established diagnoses for Lyme disease stages and other conditions. The process by which CDC established these diagnoses (e.g., expert consensus, other gold standards) is not detailed here.

    For the reproducibility study, the samples were "blinded and coded" with "expected result" (e.g., High Positive, Negative). It's not specified how these initial categorizations were established.

    4. Adjudication Method for the Test Set

    The document does not describe an adjudication method involving multiple experts resolving discrepancies for the test set results. The evaluations are primarily against established comparators like STTT or reference panels (CDC). For the reproducibility study, the agreement was 100%, so no adjudication would have been required for discrepancies.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The performance studies focus on the device's standalone accuracy against existing diagnostic methods/reference panels, rather than how human readers' performance might improve with or without AI assistance. The device is an ImmunoBlot Kit, not an AI-assisted diagnostic imaging or interpretation system.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) Was Done

    Yes, the studies reflect the standalone performance of the iDart™ Lyme IgG ImmunoBlot Kit. The "Principle of procedures" describes manual interpretation ("A strip reading guide included in each test kit shows the location of specific antigens in the test strip. ... Any band found having a visual intensity equal to or greater than the C2 control band intensity is considered as a significant (positive) band. Depending on the observed bands pattern, one can interpret the presence or absence of Lyme specific IgG antibodies in the patient serum."). The "Result Generation" for the device is listed as "Manual reading" in the comparison table. This indicates the studies assess the kit's performance as a laboratory test, interpreted by a human, but without a "human-in-the-loop" AI system.
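    To show how such a manually read band pattern could be scored programmatically, the sketch below marks a band as significant when its intensity is at least that of the C2 calibrator band (as the reading guide summary above describes) and then applies a hypothetical "two or more significant bands" positivity rule suggested by the specificity discussion earlier. The kit's actual interpretation criteria are defined in its instructions for use, so this rule and the data are assumptions for illustration only.

```python
def significant_bands(band_intensities: dict[str, float]) -> list[str]:
    """A band is significant if its intensity is >= the C2 calibrator band intensity."""
    c2 = band_intensities["C2"]
    return [name for name, value in band_intensities.items()
            if name not in ("C1", "C2") and value >= c2]

def interpret(band_intensities: dict[str, float], required_bands: int = 2) -> str:
    """Hypothetical scoring rule (assumed for illustration): IgG positive when at least
    `required_bands` antigen bands are significant."""
    bands = significant_bands(band_intensities)
    return "IgG positive" if len(bands) >= required_bands else "IgG negative"

# Made-up densitometry values for one strip (not kit data).
strip = {"C1": 3.0, "C2": 1.0, "LSA": 1.4, "P41": 1.2, "P23": 0.6, "P39": 0.3}
print(interpret(strip))  # LSA and P41 meet the C2 threshold -> "IgG positive"
```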

    7. The Type of Ground Truth Used

    • Clinical Performance (Method Comparison): The "ground truth" was established by Standard Two-Tier Test Methodology (STTT), which involves FDA-cleared EIA and immunoblot tests, performed by laboratory personnel.
    • Clinical Sensitivity/Specificity: The "ground truth" was established by the CDC Reference Panel, which represents diagnosed cases of Lyme disease at various stages, look-alike conditions, and healthy controls. The methods for establishing these CDC diagnoses are not detailed but likely involve a combination of clinical assessment and established laboratory criteria.
    • Analytical Specificity / Cross-reactivity: Ground truth for these samples was "known to contain potentially cross-reactive antibodies to Lyme infection" or "healthy individuals" or specific disease states, implying prior clinical diagnoses or sample characterization.
    • Reproducibility: Samples were initially characterized as "High Positive," "Negative," etc., which served as the expected result for comparison. The method for this initial characterization is not specified.
    • Fresh/Frozen & Antibody Class Specificity: Ground truth was based on "previously tested patient samples" with known IgG status.

    8. The Sample Size for the Training Set

    The document does not describe a "training set" in the context of machine learning or AI. The iDart™ Lyme IgG ImmunoBlot Kit is an immunoassay kit, not an AI-based device that requires model training. Therefore, this question is not applicable to the information provided.

    9. How the Ground Truth for the Training Set Was Established

    As the device is not an AI/ML product, there is no "training set" or corresponding ground truth establishment process described in the document.


    K Number
    K233605
    Manufacturer
    Date Cleared
    2024-08-07

    (272 days)

    Product Code
    Regulation Number
    866.3235
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    ADVIA Centaur EBV-EBNA IgG

    Intended Use

    The ADVIA Centaur EBV-EBNA IgG (EBVnaG) assay is for in vitro diagnostic use in the qualitative detection of IgG antibodies to Epstein-Barr virus (EBV) nuclear antigen (EBNA) in human pediatric (2-21 years old) and adult serum and plasma (EDTA and lithium heparin) using the ADVIA Centaur XP system. When used in conjunction with other EBV markers, this assay is intended for use as an aid in the diagnosis of Epstein-Barr virus infection, such as infectious mononucleosis.

    Device Description

    Not Found

    AI/ML Overview

    The provided text describes the performance of the ADVIA Centaur EBV-EBNA IgG assay, an in vitro diagnostic device for the qualitative detection of IgG antibodies to Epstein-Barr virus (EBV) nuclear antigen (EBNA).

    Here's an analysis of the acceptance criteria and study data:

    1. Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the ADVIA Centaur EBV-EBNA IgG assay are primarily demonstrated through its agreement with an FDA-cleared reference EBV EBNA IgG assay. While explicit "acceptance criteria" are not listed with numerical thresholds in a dedicated table, the clinical study results (Positive Percent Agreement - PPA and Negative Percent Agreement - NPA) are implicitly compared against an expectation of substantial equivalence to the predicate device.

    For precision and reproducibility, specific targets are mentioned.

    | Performance Metric | Acceptance Criteria (Implied/Stated) | Reported Device Performance |
    |---|---|---|
    | Clinical Study | Substantial equivalence to the predicate device (LIAISON EBNA IgG) | Population 1 (total study population, symptomatic individuals): NPA 92.8% (95% CI: 89.6% - 95.1%); PPA 99.4% (95% CI: 98.8% - 99.7%). Population 2 (known EBV EBNA IgG negative individuals): NPA 98.2% (95% CI: 94.9% - 99.4%). Pediatric subset of Population 1: NPA 97.3% (95% CI: 94.3% - 98.8%); PPA 98.4% (95% CI: 96.0% - 99.4%). Pediatric subset of Population 2: NPA 100% (95% CI: 94.9% - 100%). |
    | Precision | Not stated as a single threshold; individual sample CVs are presented | Serum samples: total precision CVs 4.4% to 13.3%. EDTA plasma samples: total precision CVs 4.3% to 9.5%. Controls: Control 1 (0.32 Index) total precision SD 0.014; Control 2 (3.16 Index) total precision CV 4.1%. |
    | Reproducibility | Concentration ≤ 0.80 Index: N/A (for CV); concentration > 0.80 Index: ≤ 20.0% CV | Serum samples: Serum A (0.77 Index) SD 0.048, CV N/A (≤ 0.80 Index); Serum B-E (1.07 to 8.85 Index) CVs 4.0% to 9.8% (all ≤ 20.0%). EDTA plasma samples: Plasma A (0.78 Index) SD 0.044, CV N/A; Plasma B-E (1.06 to 8.75 Index) CVs 5.0% to 10.7% (all ≤ 20.0%). Controls: Control 1 (0.36 Index) SD 0.022, CV N/A; Control 2 (3.31 Index) CV 4.0%. |
    | Specimen Equivalency | Regression equation close to y = x; high correlation coefficient (r) | EDTA plasma vs. serum: y = 1.00x - 0.01 Index, r = 1.00. Lithium heparin plasma vs. serum: y = 0.98x - 0.01 Index, r = 0.99. Conclusion: EDTA plasma and lithium heparin plasma are equivalent matrices to serum. |
    | Interferences | ±10% bias for reactive samples and ±0.10 Index for nonreactive samples | The listed substances (hemoglobin, bilirubin, lipemia, biotin, cholesterol, protein, etc.) do not interfere at the indicated concentrations. |
    | Cross-reactivity | No numerical criterion stated; demonstrated by testing against various viral antibodies and disease states | Data presented for 330 samples across 29 clinical categories, showing agreement or defined discrepancies with the comparative assay. For HCV, Mycoplasma pneumoniae IgG, HSV-1 IgG, and HSV-2 IgG, possible cross-reactivity cannot be excluded for a few discordant samples and results should be interpreted clinically. |
    | Onboard Stability | 28 days for reagents; 28 days for calibration | Reagents: 28 days. Calibration: 28 days. |
    | Calibrator Stability (Opened Vial) | 60 days when stored at 2-8°C | 60 days when stored at 2-8°C. |
    | Unopened Reagents/Calibrators Stability | Until expiration date when stored at 2-8°C | Until expiration date when stored at 2-8°C. |
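
    The agreement figures in the table are standard PPA/NPA calculations against the reference assay. Below is a minimal sketch of how such values and 95% confidence intervals could be reproduced from raw 2x2 counts; the counts are hypothetical and the Wilson score interval is an assumption, since the summary does not state which CI method was used.

    ```python
    import math

    def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple:
        """95% Wilson score interval for a binomial proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return center - half, center + half

    def agreement(tp: int, fp: int, fn: int, tn: int) -> dict:
        """Positive and negative percent agreement against the comparator assay."""
        ppa, npa = tp / (tp + fn), tn / (tn + fp)
        return {"PPA": (ppa, wilson_ci(tp, tp + fn)),
                "NPA": (npa, wilson_ci(tn, tn + fp))}

    # Hypothetical counts; the 510(k) summary reports only the resulting percentages.
    print(agreement(tp=990, fp=9, fn=6, tn=116))
    ```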

    2. Sample Size and Data Provenance

    Clinical Study (Test Set):

    • Total Study Population (Population 1): 1428 leftover samples.
      • Provenance: Collected over a contiguous time period from individuals for whom an EBV test was ordered. The document does not specify the country of origin but implies a clinical setting ("symptoms and signs for whom an EBV antibody test was ordered"). It is a retrospective collection of leftover samples.
    • Known EBV EBNA IgG Negative Population (Population 2): 167 samples.
      • Provenance: Samples with a known EBV EBNA IgG negative result, used to supplement numbers for negative EBV EBNA IgG. Retrospective.
    • Pediatric Population: Subsets of Population 1 (479 samples including 84 unclassified serostatus individuals) and Population 2 (72 samples including 6 unclassified serostatus individuals).
    • Cross-reactivity Study: 330 samples across various clinical categories.
    • Specimen Equivalency Study: 97-98 sets of matched samples (SST, EDTA plasma, lithium heparin plasma).
      • Provenance: Commercial sources.

    3. Number of Experts and Qualifications for Ground Truth

    The document explicitly states that the ground truth for the clinical study was established by an "FDA cleared EBV EBNA IgG reference assay."
    It also states: "Equivocal reference assay results were resolved by 2 other comparative assays."

    • Number of 'Experts' (resolving assays): 2 (for equivocal cases from the primary reference assay).
    • "Qualifications" of these 'experts': These were "comparative assays" rather than human experts. The document does not specify if these were other FDA-cleared assays or the nature of their qualification, but the implication is that they served as a consensus mechanism to resolve indeterminate results from the primary reference assay.

    4. Adjudication Method for the Test Set

    Equivocal results from the primary reference assay were adjudicated by "2 other comparative assays," effectively a 1+2 scheme (one primary reference assay plus two comparative assays). If the two comparative assays agreed, that agreement presumably established the reconciled ground truth for the equivocal samples; the document does not describe what happened if the two comparative assays disagreed.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study was done. This device is an in vitro diagnostic assay, which typically does not involve human readers interpreting images or data alongside AI. The device is evaluated for its analytical and clinical performance against a reference method. Therefore, there is no information on how much human readers improve with AI vs. without AI assistance.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance)

    Yes, a standalone performance study was done. The entire clinical study, precision, reproducibility, specimen equivalency, and interference studies evaluate the performance of the ADVIA Centaur EBV-EBNA IgG assay as a standalone device (algorithm/assay only) against a reference method or predetermined analytical specifications. There is no human-in-the-loop component described for its primary intended use and evaluation.

    7. Type of Ground Truth Used

    The ground truth used for the clinical study was based on an FDA-cleared EBV EBNA IgG reference assay, with equivocal results resolved by 2 other comparative assays. This indicates a reference method or comparative assay-based ground truth.

    8. Sample Size for the Training Set

    The document is a 510(k) summary for an in vitro diagnostic assay. It does not provide information regarding a "training set" in the context of machine learning or AI models.
    The samples mentioned are for performance evaluation (clinical study, precision, etc.) and are analogous to test or validation sets. For IVD devices, a "training set" might refer to samples used during the development and optimization of the assay's reagents and parameters, but this information is not typically disclosed in a 510(k) summary in this format.

    9. How the Ground Truth for the Training Set Was Established

    As there is no "training set" described in the context of an AI/ML model for this IVD assay according to the provided document, this information is not applicable and not available.


    K Number
    K233663
    Date Cleared
    2023-12-13

    (28 days)

    Product Code
    Regulation Number
    866.5510
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    N Antisera to Human Immunoglobulins (IgG, IgA, and IgM)

    Intended Use

    In-vitro diagnostic reagents for the quantitative determination of immunoglobulins (IgG, IgA, and IgM) in human serum, heparinized and EDTA plasma, and IgG in human urine and cerebrospinal fluid (CSF) by means of immunonephelometry on the BN II and BN ProSpec® System. Measurements of IgG aid in the diagnosis of abnormal protein metabolism and the body's lack of ability to resist infectious agents.

    Device Description

    The N Antiserum to Human IgG reagent containing animal serum, produced by immunization of rabbits with highly purified human immunoglobulin (

    AI/ML Overview

    The provided text describes a special 510(k) premarket notification for a modified device, "N Antisera to Human Immunoglobulins (IgG, IgA, and IgM)". The sole modification is the addition of a High Dose Hook (HDH) effect claim for IgG in cerebrospinal fluid (CSF) samples.

    Here's a breakdown of the requested information based on the provided document:

    1. A table of acceptance criteria and the reported device performance

    | Acceptance Criteria | Reported Device Performance |
    |---|---|
    | Minimum High Dose Hook limit for CSF samples | High Dose Hook limit demonstrated for all three lots up to the maximum measured concentration of 1130 mg/L |
    | Adherence to a minimum HDH limit of up to 412 mg/L | Exceeded; the device demonstrated an HDH limit of 1130 mg/L |

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size: The study used a "CSF high sample pool." The exact number of individual patient samples contributing to this pool (if multiple were pooled) is not specified. However, the study involved a dilution scheme with twelve (12) individual dilution levels, including the neat sample.
    • Data Provenance: Not explicitly stated. The manufacturer is Siemens Healthcare Diagnostics Products GmbH, located in Marburg, Germany, which suggests the study was likely conducted in Germany or a similar geographic region. It is implicitly a prospective study designed to evaluate the HDH effect for the modified device.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This section is not applicable as the device is an in-vitro diagnostic reagent for quantitative determination, not an imaging or interpretive device that would typically require expert ground truth establishment in the described manner. The "ground truth" for this type of test is the quantitatively measured concentration of IgG in a sample, established through laboratory methods and comparison to known standards.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This section is not applicable. Adjudication methods like 2+1 or 3+1 are typically used for establishing ground truth in interpretive studies (e.g., radiologists reviewing images). For a quantitative in-vitro diagnostic test, the "ground truth" is determined by the analytical method itself against known standards.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    This is not applicable. The device is an in-vitro diagnostic reagent, not an AI-assisted diagnostic tool or an imaging device requiring human reader interpretation. No MRMC study was performed.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    This refers to the performance of the analytical system (N Antisera reagent on BN II System) without human intervention in the measurement process. The HDH study performed is a standalone performance evaluation of the reagent/instrument system. The acceptance criteria and performance data in the table above demonstrate this standalone performance.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    For this in-vitro diagnostic device, the "ground truth" for the High Dose Hook study was established by creating known dilutions of a high-concentration CSF sample, which were then measured by the device. The reported concentrations for these dilutions serve as the reference. The ultimate analytical ground truth for the quantitative measurement itself is tied to international standards like ERM-DA470k/IFCC (as stated in the "Traceability/Standardization" section).
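
    A hook-effect check of this kind compares results across the dilution series and flags any case where the measured value stops rising as the true concentration increases. The sketch below illustrates the idea with a hypothetical dilution scheme and hypothetical readings; only the 1130 mg/L maximum comes from the summary.

    ```python
    # Illustrative hook-effect check over a dilution series of a high-IgG CSF pool.
    # Dilution factors and measured values are hypothetical.

    def expect_from_neat(neat_conc: float, dilution_factors: list) -> list:
        """Expected concentration at each dilution level of the neat sample."""
        return [neat_conc / f for f in dilution_factors]

    def has_hook_effect(measured: list) -> bool:
        """Flag a hook if the result stops increasing as concentration increases.

        `measured` is ordered from the most dilute level to the neat sample, so a
        monotonically non-decreasing series indicates no high-dose hook.
        """
        return any(later < earlier for earlier, later in zip(measured, measured[1:]))

    neat = 1130.0                                  # mg/L, highest concentration reported
    dilutions = [32, 16, 8, 4, 2, 1]               # most dilute -> neat, hypothetical scheme
    expected = expect_from_neat(neat, dilutions)
    measured = [35.0, 70.5, 142.0, 281.0, 566.0, 1130.0]  # mg/L, hypothetical readings

    print("Expected:", [round(x, 1) for x in expected])
    print("Hook effect detected:", has_hook_effect(measured))
    ```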

    8. The sample size for the training set

    This document does not describe the development or training of a machine learning model, so there is no training set in the typical sense. The "training" for such a diagnostic test involves method development, optimization, and validation using various samples and controls, but these are not referred to as a "training set" for an algorithm.

    9. How the ground truth for the training set was established

    As there is no training set for an algorithm, this question is not applicable.


    K Number
    K231214
    Manufacturer
    Date Cleared
    2023-10-27

    (182 days)

    Product Code
    Regulation Number
    866.3900
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    LIAISON VZV IgG HT, LIAISON Control VZV IgG HT

    Intended Use

    The LIAISON® VZV IgG HT assay uses chemiluminescent immunoassay (CLIA) technology for the in vitro qualitative detection of specific IgG antibodies to varicella-zoster virus (VZV) in human serum (with gel and without gel-SST), dipotassium EDTA (K2-EDTA), lithium heparin and sodium heparin plasma samples. This assay is intended as an aid in the determination of previous infection of varicella-zoster virus. The test must be performed on the LIAISON® XL Analyzer. The assay performance in detecting antibodies to VZV in individuals vaccinated with the FDA-licensed VZV vaccine is unknown. The user of this assay is responsible for establishing the performance characteristics with VZV vaccinated individuals.

    Device Description

    The LIAISON® VZV IgG HT is an indirect chemiluminescence immunoassay (CLIA) for qualitative detection of specific IgG antibodies to varicella-zoster virus in human serum and plasma.

    The LIAISON® Control VZV IgG HT are liquid ready-to-use controls based in human serum and plasma. The negative control is intended to provide an assay response characteristic of negative patient specimens and the positive control is intended to provide an assay response characteristic of positive patient specimens.

    The assay and controls are designed for use with the DiaSorin LIAISON® analyzer family.

    AI/ML Overview

    Here's an analysis of the provided text regarding the DiaSorin LIAISON® VZV IgG HT device, focusing on acceptance criteria and supporting study details:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for this device are primarily expressed as Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA) compared to a predicate device, as well as satisfactory performance in interference, cross-reactivity, precision, and high-dose saturation studies.

    | Acceptance Criterion | Requirement/Goal (Implied or Stated) | Reported Device Performance |
    |---|---|---|
    | Clinical Agreement (vs. Predicate) | | |
    | Known Positive Specimens: PPA | High agreement, ideally >95% (common for diagnostic assays) | 99.2% (123/124); 95% CI (95.6%-99.9%) |
    | Known Positive Specimens: NPA | High agreement (common for diagnostic assays) | 100% (1/1); 95% CI (20.7%-100%) |
    | Known Negative Specimens: NPA | Low false positive rate; high agreement, ideally >95% | 97.9% (190/194); 95% CI (94.8%-99.2%) |
    | Normal Lab Routine Specimens: PPA | High agreement, ideally >95% | 97.4% (556/571); 95% CI (95.7%-98.4%) |
    | Normal Lab Routine Specimens: NPA | High agreement, ideally >95% | 98.2% (503/512); 95% CI (96.7%-99.1%) |
    | Pregnant Women: PPA | High agreement, ideally >95% | 98.2% (108/110); 95% CI (93.6%-99.5%) |
    | Pregnant Women: NPA | High agreement, ideally >95% | 96.0% (24/25); 95% CI (80.5%-99.3%) |
    | Potential Interfering Substances | No interference at specified concentrations for listed endogenous and exogenous substances | No interference observed for any listed substance at the specified concentrations |
    | Potential Cross-Reactivity | No false positives from antibodies to other common infectious agents or medical conditions | No reactive results for any of the 226 tested cross-reactive samples (0/226) |
    | Precision (Within-Laboratory) | Acceptable variability (SD and CV%) for negative, near-cut-off, low positive, and positive samples | Total CV% ranges from 1.8% to 23.5%; lower for positive controls/samples, higher for negative controls |
    | Reproducibility (Multi-site) | Acceptable variability (SD and CV%) across sites and days | CV% ranges from 3.2% to 13.0%; lower for positive samples, higher for the negative control |
    | High-dose saturation effect | No misclassification or underestimation of high-titer samples | No sample misclassification and no high-dose saturation effect observed |
    | Analytical sensitivity | Defined sensitivity at the cutoff | 152.4 mIU/mL at the cutoff level (1.0 S/CO) |
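
    As the table notes, the assay reports a signal-to-cutoff (S/CO) index with a cutoff of 1.0 and, per the adjudication discussion below, no equivocal zone. A minimal sketch of that qualitative call is shown here; how results exactly at the cutoff are reported is an assumption.

    ```python
    def classify_vzv_igg(sco_index: float, cutoff: float = 1.0) -> str:
        """Qualitative call from a signal-to-cutoff (S/CO) index with no equivocal zone.

        Results at or above the cutoff are treated as reactive here; the summary
        does not spell out how values falling exactly at 1.0 are reported.
        """
        return "Reactive" if sco_index >= cutoff else "Nonreactive"

    for value in (0.4, 0.99, 1.0, 3.7):
        print(value, classify_vzv_igg(value))
    ```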

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size:
      • Total Clinical Agreement Study: 1544 clinical human serum samples (1543 used in analysis due to one sample with insufficient volume).
      • Breakdown: 125 known positive, 200 known negative, 135 pregnant women, and 1084 routine lab specimens.
      • Specific sub-studies:
        • Interfering Substances: Not specified, but involved VZV IgG antibody negative, around the cut-off, low positive, and high positive samples.
        • Cross-Reactivity: 226 samples from various conditions.
        • Precision (Within-Lab): 7 samples (panel of coded samples) tested 240 times each.
        • Reproducibility (Multi-site): 7 samples tested 90 times each across sites.
        • High-dose saturation: 3 high-titer samples.
        • Analytical sensitivity: Not a sample size of patient specimens, but derived from serial dilutions of WHO International Standard on 3 assay lots.
    • Data Provenance: The general clinical samples were collected within the United States. The study was prospective in execution as it involved testing these samples with the new device and comparing them to a predicate, conducted at three independent external laboratories.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document does not explicitly state the number of experts used and their qualifications for establishing the ground truth of the test set.

    4. Adjudication Method for the Test Set

    The document does not explicitly state an adjudication method (like 2+1, 3+1). The "ground truth" for the clinical agreement study appears to be defined by the results of the FDA cleared predicate device (LIAISON® VZV IgG, K150375), which is referred to as the "comparator." It notes that "Specimens which were repeatedly equivocal by the predicate device were graded against the performance of the LIAISON® VZV IgG HT assay which does not have an equivocal zone." This implies a direct comparison to the predicate's results rather than an independent expert adjudication process for the clinical samples. For cross-reactivity, samples were "pre-screened with another commercially available VZV IgG assay" and then confirmed for the presence of potential cross-reactants using "US-marked assays."

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. This device is an automated in vitro diagnostic assay (CLIA technology) for qualitative detection of antibodies, not an imaging device requiring human reader interpretation or AI assistance in the human-in-the-loop context.

    6. Standalone (Algorithm Only) Performance Study

    Yes, the entire clinical performance evaluation described (Clinical Agreement, Interfering Substances, Cross-Reactivity, Precision, Reproducibility, High-dose saturation, Analytical Sensitivity) is essentially a standalone algorithm-only performance study. The LIAISON® VZV IgG HT assay is an automated system run on the LIAISON® XL Analyzer, meaning its performance is evaluated without human interpretation of results beyond reading the automated output.

    7. Type of Ground Truth Used

    The primary ground truth for the clinical agreement study was established by the FDA cleared predicate device (LIAISON® VZV IgG, K150375). For the "known positive" and "known negative" specimens, their status was pre-determined, likely by previous clinical diagnosis or established VZV serology results (though the exact method for this is not detailed beyond being "known"). For cross-reactivity studies, ground truth was based on positive results from "US-marked assays" for the specific cross-reacting agent.

    8. Sample Size for the Training Set

    The document does not specify a training set sample size. This is typical for in vitro diagnostic (IVD) assays like this one. While there is an "algorithm" (the CLIA technology and interpretation logic), it's not a machine learning model that undergoes a separate training phase with a distinct dataset in the way a medical imaging AI would. The "development" and "optimization" of such assays usually happen using internal samples and established chemical/biological principles, not a formalized, reported training set size like in AI/ML submissions.

    9. How the Ground Truth for the Training Set Was Established

    Since a formalized "training set" for a machine learning algorithm isn't explicitly mentioned or directly applicable in the typical sense for this type of IVD, the concept of establishing ground truth for it is also not directly addressed. The assay's performance characteristics are developed and validated based on its underlying chemical and biological reactions and internal testing, which ensures it correctly identifies VZV IgG antibodies.


    K Number
    K212769
    Date Cleared
    2023-09-29

    (759 days)

    Product Code
    Regulation Number
    866.3510
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    DYNEX SmartPLEX MMRV IgG Assay Kit

    Intended Use

    The DYNEX SmartPLEX MMRV IgG Assay Kit is a multiplex immunoassay intended for the qualitative detection of IgG antibodies to Measles, Mumps, Rubella, and Varicella-Zoster Virus (VZV) in human serum. The DYNEX SmartPLEX MMRV IgG Assay Kit is intended for use with the DYNEX Multiplier Analyzer.

    The DYNEX SmartPLEX MMRV IgG Assay Kit is intended to be used as an aid in the determination of serological status to Measles, Mumps, Rubella, and Varicella-Zoster Virus (VZV) in human serum from adults and pediatrics age above 1 year. This kit is not intended for screening blood or plasma donors.

    The performance of this device has not been established for use in neonates, pediatric patients below 1 year of age, and immunocompromised patients, or for use at point of care facilities.

    Device Description

    The DYNEX SmartPLEX MMRV IgG Assay Kit (SmartPLEX MMRV IgG Assay) uses multiplex immunoassay, a methodology that greatly resembles traditional ELISA while permitting simultaneous detection and identification of different antibodies in a single well. The reaction is processed in a 96-well microtiter plate, with six polystyrene beads embedded in each well of the plate. Four (4) different beads are coated with antigens for the detection of IgG antibodies to Measles, Mumps, Rubella and Varicella-Zoster virus in human serum. Two additional beads are included in each reaction well as filler beads. Specimen processing is fully automated on the Multiplier Analyzer.

    The Multiplier Analyzer adds the patient serum specimen and reagents to each well of the 96-well plate, after which the mixture is incubated at 37°C with shaking. A wash cycle then removes unbound antibodies from the patient's specimen. Anti-human polyclonal IgG antibody conjugated to horseradish peroxidase (HRP) is added, after which the mixture is incubated at 37°C with shaking. A second wash step removes excess conjugate, then luminol substrate is added to each well. The amount of antibody captured by the antigen is determined by the chemiluminescence triggered by the attached HRP. Raw data are captured as light photons, which are converted into relative light intensity units (RLU).

    The Multiplier software analyzes the image and generates a report that details the mean RLU signal for each target bead (MMRV) by test sample. In every assay a calibrator is run. The DYNEX SmartPLEX MMRV IgG Assay Kit is qualitative and produces a result defined as negative (NEG), equivocal (EQV) or positive (POS) for each target analyte. The result is calculated in the Multiplier software by dividing the test sample RLU values by the mean calibrator RLU value to produce an index value for each target.
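
    The index calculation and qualitative call described above can be sketched as follows. The equivocal-zone boundaries are not given in the summary, so the 0.9-1.1 band and all RLU values used here are purely illustrative.

    ```python
    from statistics import mean

    # Hypothetical equivocal zone; the actual per-analyte cutoffs are not stated
    # in the 510(k) summary.
    EQUIVOCAL_LOW, EQUIVOCAL_HIGH = 0.9, 1.1

    def index_value(sample_rlu: float, calibrator_rlus: list) -> float:
        """Index = sample RLU divided by the mean calibrator RLU."""
        return sample_rlu / mean(calibrator_rlus)

    def classify(index: float) -> str:
        """Map an index value to a NEG/EQV/POS call using the illustrative zone above."""
        if index < EQUIVOCAL_LOW:
            return "NEG"
        if index <= EQUIVOCAL_HIGH:
            return "EQV"
        return "POS"

    calibrators = [10400.0, 10650.0, 10200.0]  # RLU, hypothetical calibrator replicates
    for analyte, rlu in {"Measles": 24100.0, "Mumps": 9800.0,
                         "Rubella": 11000.0, "VZV": 4100.0}.items():
        idx = index_value(rlu, calibrators)
        print(f"{analyte}: index {idx:.2f} -> {classify(idx)}")
    ```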

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study data for the DYNEX SmartPLEX MMRV IgG Assay Kit, as requested, based on the provided FDA 510(k) summary.

    Device Name: DYNEX SmartPLEX MMRV IgG Assay Kit

    Indications for Use: Qualitative detection of IgG antibodies to Measles, Mumps, Rubella, and Varicella-Zoster Virus (VZV) in human serum, as an aid in the determination of serological status. Intended for use with the DYNEX Multiplier Analyzer, in adults and pediatrics age above 1 year. Not intended for screening blood or plasma donors, neonates, pediatric patients below 1 year, or immunocompromised patients, or for point-of-care facilities.


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state pre-defined acceptance criteria values (e.g., "PPA must be >X%"). Instead, it presents the performance results obtained from the study and implies that these results were deemed acceptable for clearance. For this table, I will use the reported Clinical Performance (Method Comparison) as the primary indicator of device performance, specifically the Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA) values.

    | Performance Metric | Category (Analyte) | Acceptance Criteria (Implied) | Reported Device Performance |
    |---|---|---|---|
    | Clinical Performance (Method Comparison) | | | |
    | Positive Percent Agreement (PPA) | Measles IgG (Pediatric and Adult) | High PPA to demonstrate positive agreement with comparator | 87.00% (95% CI: 85.4 - 88.5%) |
    | Negative Percent Agreement (NPA) | Measles IgG (Pediatric and Adult) | High NPA to demonstrate negative agreement with comparator | 98.70% (95% CI: 96.3 - 99.6%) |
    | Positive Percent Agreement (PPA) | Mumps IgG (Pediatric and Adult) | High PPA to demonstrate positive agreement with comparator | 94.70% (95% CI: 93.6 - 95.7%) |
    | Negative Percent Agreement (NPA) | Mumps IgG (Pediatric and Adult) | High NPA to demonstrate negative agreement with comparator | 78.90% (95% CI: 73.3 - 83.5%) |
    | Positive Percent Agreement (PPA) | Rubella IgG (Pediatric and Adult) | High PPA to demonstrate positive agreement with comparator | 92.40% (95% CI: 91.0 - 93.5%) |
    | Negative Percent Agreement (NPA) | Rubella IgG (Pediatric and Adult) | High NPA to demonstrate negative agreement with comparator | 99.50% (95% CI: 96.6 - 100%) |
    | Positive Percent Agreement (PPA) | VZV IgG (Pediatric and Adult) | High PPA to demonstrate positive agreement with comparator | 96.70% (95% CI: 95.8 - 97.5%) |
    | Negative Percent Agreement (NPA) | VZV IgG (Pediatric and Adult) | High NPA to demonstrate negative agreement with comparator | 88.00% (95% CI: 83.7 - 91.4%) |
    | Reproducibility | | | |
    | Total %CV (mean over all samples) | Measles IgG | Low %CV to demonstrate consistency | Max total %CV: 10.3% (Sample 3) |
    | Total %CV (mean over all samples) | Mumps IgG | Low %CV to demonstrate consistency | Max total %CV: 11.1% (Sample 20) |
    | Total %CV (mean over all samples) | Rubella IgG | Low %CV to demonstrate consistency | Max total %CV: 9.0% (Sample 13) |
    | Total %CV (mean over all samples) | VZV IgG | Low %CV to demonstrate consistency | Max total %CV: 8.7% (Sample 13) |
    | Within-Laboratory Precision | | | |
    | Total %CV (mean over all samples) | Measles IgG | Low %CV to demonstrate consistency | Max total %CV: 8.5% (Sample 21) |
    | Total %CV (mean over all samples) | Mumps IgG | Low %CV to demonstrate consistency | Max total %CV: 11.1% (Sample 20) |
    | Total %CV (mean over all samples) | Rubella IgG | Low %CV to demonstrate consistency | Max total %CV: 6.0% (Sample 13) |
    | Total %CV (mean over all samples) | VZV IgG | Low %CV to demonstrate consistency | Max total %CV: 7.8% (Sample 13) |
    | Potential Cross-Reactivity (Negative Agreement) | | | |
    | Negative Agreement | Specific potentially cross-reactive conditions (e.g., ANA, CMV, EBV) for each measurand | High negative agreement to indicate no false positives | Generally 100% (e.g., 5/5, 6/6, 10/10); one exception: 2/3 for HSV-2 on Mumps |
    | Interfering Substances | | | |
    | No interference | Specified substances (albumin, bilirubin, cholesterol, hemoglobin, triglyceride) | No significant interference | No interference observed at the maximum tested concentrations |
    | Shelf Life | | | |
    | Stability period | Storage at 2-8°C | Stability for a defined period | 18 months at 2-8°C (evaluated up to 25 months; stable through 19 months; 18 months assigned) |

    Note: The acceptance criteria are "implied" as the document presents the results to demonstrate performance rather than explicitly stating pre-defined thresholds the device needed to meet for clearance.
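
    The total %CV figures above are coefficient-of-variation summaries (standard deviation divided by mean, expressed as a percentage). The sketch below shows the basic calculation over hypothetical replicate index values; it is a simplified pooled computation and does not reproduce the run/day/site variance components of a full precision study.

    ```python
    from statistics import mean, stdev

    def total_percent_cv(replicates: list) -> float:
        """Pooled %CV = (standard deviation / mean) * 100 over all replicate results."""
        return stdev(replicates) / mean(replicates) * 100

    # Hypothetical index values for one sample across runs, days, and sites.
    replicates = [3.10, 3.25, 2.98, 3.40, 3.18, 3.05, 3.30, 3.22]
    print(f"Total %CV: {total_percent_cv(replicates):.1f}%")
    ```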


    2. Sample Sizes and Data Provenance

    • Test Set Sample Size:
      • Clinical Performance (Method Comparison): N = 2512 retrospective human serum specimens.
        • Adults: N = 1676
        • Pregnant Women: N = 500
        • Pediatrics (age above 1 year): N = 336
      • Reproducibility and Within-Laboratory Precision: 22 serum samples, each tested 240 replicates.
      • Potential Cross-Reactivity: Variable N for each substance (e.g., ANA n=5, CMV n=6-8, EBV n=6-11).
    • Data Provenance: Retrospective human serum specimens obtained from commercial vendors. The method comparison testing was performed at two US laboratory testing sites.

    3. Number of Experts and Qualifications for Ground Truth

    • The ground truth for clinical performance (method comparison) was established by FDA-cleared comparator tests, not through expert human readers or adjudicators for each individual case result. The agreement was measured against the results of these established assays.
    • For specimens with equivocal results on the test device and comparator device, they were retested with two additional FDA-cleared methods.

    4. Adjudication Method for the Test Set

    • For equivocal results that remained equivocal after initial retesting with the comparator device, a "2/3 rule" was used to establish a consensus final comparator result. This means that if at least two out of the three comparator devices provided the same categorical result (Positive, Equivocal, or Negative), that result was taken as the consensus (see the sketch after this list).
    • Any remaining equivocal results (where no 2/3 consensus was reached or the consensus was still equivocal) were "counted against the clinical performance" of the SmartPLEX MMRV IgG Assay (this is implied by the 3x3 analysis where equivocal results from the test device are presented in comparison to the "Final Comparator Result").
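
    A minimal sketch of that 2/3 rule follows, assuming the three comparator results are simple categorical strings and that a three-way disagreement is left unresolved (the summary does not describe how that case was handled).

    ```python
    from collections import Counter

    def consensus_2_of_3(results: list) -> str:
        """Apply a 2/3 rule to three categorical comparator results.

        Returns the category reported by at least two of the three comparators,
        or "Unresolved" when all three disagree.
        """
        category, count = Counter(results).most_common(1)[0]
        return category if count >= 2 else "Unresolved"

    print(consensus_2_of_3(["Positive", "Equivocal", "Positive"]))   # -> Positive
    print(consensus_2_of_3(["Negative", "Equivocal", "Positive"]))   # -> Unresolved
    ```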

    5. MRMC Comparative Effectiveness Study

    • No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. This document describes the performance of an in vitro diagnostic (IVD) assay kit, which directly measures antibodies in serum. These types of devices do not typically involve human readers interpreting images or data to the extent that an MRMC study would be applicable. The performance is assessed by comparison to established laboratory methods or ground truth.

    6. Standalone Performance

    • Yes, standalone performance was done. The entire study is a standalone performance evaluation of the DYNEX SmartPLEX MMRV IgG Assay Kit in relation to comparator methods. The device's output (qualitative detection of IgG antibodies) is directly compared to the output of other FDA-cleared IVD assays. There is no "human-in-the-loop" component for this type of diagnostic assay, as its output is a direct measurement.

    7. Type of Ground Truth Used

    • The ground truth for the clinical performance study was primarily based on the results from one or more FDA-cleared comparator immunoassay devices. For ambiguous cases (equivocal results), a consensus derived from multiple FDA-cleared comparator methods using a "2/3 rule" was employed. This is a common method for establishing reference values in IVD studies where a perfect "gold standard" may not exist for all samples, or where the goal is to show substantial equivalence to established methods.

    8. Sample Size for the Training Set

    • The document does not specify a separate "training set" sample size or details about a training phase. For IVD assay kits, the development and optimization process (analogous to training) typically involves internal experimentation, formulation adjustments, and preliminary testing, rather than a distinct "training set" of patient samples in the same way an AI/ML algorithm would use labeled data. The provided data represents the validation/test set used for regulatory submission.

    9. How the Ground Truth for the Training Set was Established

    • Since a distinct "training set" as understood in AI/ML was not explicitly used or described in the context of this IVD assay kit, the concept of establishing ground truth for a training set is not directly applicable here. The focus is on the performance of the final, developed kit. The development process would have involved establishing specifications and ensuring the assay's ability to accurately detect the target antibodies, perhaps using characterized positive/negative panels, but this is not typically detailed as "ground truth for training" in 510(k) summaries for such devices.