
510(k) Data Aggregation

    K Number
    K991804
    Date Cleared
    2000-05-05

    (345 days)

    Product Code
    Regulation Number
    866.5660
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Intended Use

    For the in-vitro qualitative screening of IgG and IgM autoantibodies against B2-glycoprotein 1 (B2GP1) present in human serum. This kit may be used in conjunction with anticardiolipin assays and clinical information to aid the diagnosis of thrombosis in at-risk patients having, for example, antiphospholipid syndrome (APS) or systemic lupus erythematosus (SLE).

    Device Description

    Bindazyme® Anti-B2 GP1 Screen EIA Test Kit

    AI/ML Overview

    This is an FDA Premarket Notification (510(k)), K991804, for a diagnostic test kit, not the kind of device that would typically have the performance criteria and study structure you're asking about (e.g., AI algorithms, MRMC studies, specific effect sizes for human reader improvement).

    The document describes the Bindazyme® Anti-B2 GP1 Screen EIA Test Kit, an in-vitro diagnostic device for the qualitative screening of IgG and IgM autoantibodies against B2-glycoprotein 1 (B2GP1) in human serum. This type of device's "performance" is typically evaluated through clinical sensitivity, specificity, accuracy, precision, and agreement with predicate devices or reference methods.

    Therefore, many of the specific questions you've asked, such as "effect size of how much human readers improve with AI vs without AI assistance," "adjudication method," and "number of experts used to establish the ground truth for the test set," are not applicable to this type of medical device submission.

    However, I can extract the relevant information that is present or infer what would typically be part of such a submission for an in vitro diagnostic kit.

    Here's an attempt to answer based on the provided document and typical expectations for an IVD kit:

    1. A table of acceptance criteria and the reported device performance

    The provided document does not explicitly list acceptance criteria or detailed reported device performance in this summary letter. For EIA kits, typical acceptance criteria and the performance evidence that would accompany them (inferred here, not stated in the document) include the following; a worked sketch of the headline calculations appears after the list:

    • Clinical Sensitivity: ability to correctly identify positive samples. Would be reported in a separate performance-study section of the 510(k), comparing results to a gold standard or predicate device, typically as a percentage.
    • Clinical Specificity: ability to correctly identify negative samples. Would likewise be reported in the performance-study section, typically as a percentage.
    • Accuracy/Agreement: overall agreement with a reference method or predicate device. Typically reported as overall percent agreement, positive percent agreement, and negative percent agreement.
    • Precision/Reproducibility: consistency of results across repeated testing. Would include within-run, between-run, and between-day precision, often expressed as %CV.
    • Linearity/Assay Range: the range over which the assay gives accurate results. Not directly applicable to a qualitative screening kit; quantitative versions would report this.
    • Interference: no significant impact from common interfering substances. Would be tested with various endogenous and exogenous substances, with results confirming no significant interference within specified limits.
    • Cross-reactivity: no significant reactivity with related or potentially interfering antibodies/substances. Specific antibodies or substances would be tested to rule out false positives.
    • Shelf-life Stability: the device maintains performance over its stated shelf life. Real-time or accelerated stability studies would support the expiration date.
    • Substantial Equivalence: demonstrated equivalence to a legally marketed predicate device. This is the primary acceptance criterion for 510(k) clearance; the FDA letter confirms that the device is substantially equivalent to legally marketed predicate devices.
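
    To make the headline metrics concrete, here is a minimal sketch in Python of how clinical sensitivity, specificity, overall agreement, and %CV are typically computed. The counts and optical-density replicates are hypothetical, not values from K991804; a real submission would report its own 2x2 comparison against the predicate or reference method.

```python
import statistics

# Hypothetical 2x2 comparison of the candidate EIA against a reference
# method -- these counts are illustrative, not from K991804.
tp, fp, fn, tn = 88, 6, 12, 194  # true pos, false pos, false neg, true neg

sensitivity = tp / (tp + fn)              # aka positive percent agreement
specificity = tn / (tn + fp)              # aka negative percent agreement
overall_agreement = (tp + tn) / (tp + fp + fn + tn)

print(f"Sensitivity/PPA:   {sensitivity:.1%}")
print(f"Specificity/NPA:   {specificity:.1%}")
print(f"Overall agreement: {overall_agreement:.1%}")

# Precision is commonly summarized as a coefficient of variation (%CV)
# over replicate optical-density readings of the same sample.
replicate_ods = [0.91, 0.87, 0.95, 0.89, 0.92]  # hypothetical replicates
cv = statistics.stdev(replicate_ods) / statistics.mean(replicate_ods)
print(f"Within-run %CV:    {cv:.1%}")
```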

    2. Sample size used for the test set and the data provenance

    The provided document does not specify the sample size for the test set or the data provenance. For an IVD kit, performance studies would typically involve:

    • Sample Size: varies, but would need to be large enough to support statistically meaningful estimates, often hundreds of samples (e.g., 200-500 or more), including both positive and negative cases from various patient populations (see the sizing sketch after this list).
    • Data Provenance: Could be prospective (newly collected samples) or retrospective (archived samples), and would typically specify the disease prevalence represented, as well as characteristics of the patient population (e.g., patients suspected of APS/SLE, healthy controls, other autoimmune conditions). Country of origin is usually not explicitly stated in the summary, but data would be from clinical sites.
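
    As a rough illustration of why such studies run to hundreds of samples, a standard sizing approach for estimating a proportion such as sensitivity uses the normal approximation n = z^2 * p * (1 - p) / d^2, where p is the expected proportion and d the desired margin of error. A minimal sketch with illustrative inputs (none of these numbers come from the submission):

```python
import math

def n_for_proportion(p_expected: float, margin: float, z: float = 1.96) -> int:
    """Samples needed to estimate a proportion (e.g., sensitivity) to
    within +/- margin at ~95% confidence (normal approximation)."""
    return math.ceil(z ** 2 * p_expected * (1 - p_expected) / margin ** 2)

# Illustrative: expect ~90% sensitivity, want a +/-5% margin of error.
print(n_for_proportion(0.90, 0.05))  # 139 positive samples
# Tightening the margin to +/-3% pushes the count toward 400.
print(n_for_proportion(0.90, 0.03))  # 385
```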

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This question is not applicable in the context of an in vitro diagnostic test kit that measures a biomarker (antibodies). Ground truth for such a device is typically established by:

    • Reference Methods: Comparison against an established "gold standard" laboratory method (e.g., another FDA-cleared or well-validated EIA kit, IFA, Western Blot, or a quantitative immunoassay).
    • Clinical Diagnosis: Clinical diagnosis of a condition (e.g., Antiphospholipid Syndrome (APS) or Systemic Lupus Erythematosus (SLE)) as determined by a physician (e.g., rheumatologist) using established diagnostic criteria, often supported by other laboratory tests and clinical findings.

    4. Adjudication method for the test set

    Adjudication in the traditional sense is not applicable to an IVD kit's performance study. If comparison against a predicate device produced discordant results, those discordant samples might undergo further testing with a tie-breaker method or a more definitive reference method.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    Not applicable. MRMC studies and "human readers improve with AI" are concepts relevant to AI-powered image analysis or diagnostic support systems, not a chemical-based in vitro immunoassay test kit. This device is read objectively (e.g., spectrophotometrically) and does not involve human interpretation of complex images or data that AI would assist with.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Not applicable. This device is a manual or semi-automated immunoassay kit; there is no embedded algorithm in the sense of AI. The "algorithm" is the biochemical reaction and the reading of optical density according to the manufacturer's instructions. Its performance is inherent in the kit's design and reagents.
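
    To illustrate what objective, spectrophotometric reading means in practice, here is a minimal sketch of the typical decision rule for a qualitative EIA: each sample's optical density (OD) is expressed as a ratio to a cutoff calibrator and classified against a threshold. The 0.9-1.1 equivocal band is a common convention assumed here for illustration, not a value from this kit's instructions for use.

```python
def classify_eia(sample_od: float, calibrator_od: float,
                 equivocal_band: tuple = (0.9, 1.1)) -> str:
    """Classify a qualitative EIA result from optical densities.

    The sample OD is expressed as a ratio to the cutoff calibrator OD;
    the 0.9-1.1 equivocal band is an assumed, common convention.
    """
    ratio = sample_od / calibrator_od
    low, high = equivocal_band
    if ratio < low:
        return "negative"
    if ratio > high:
        return "positive"
    return "equivocal"

print(classify_eia(sample_od=1.42, calibrator_od=0.95))  # positive
print(classify_eia(sample_od=0.40, calibrator_od=0.95))  # negative
```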

    7. The type of ground truth used

    For an in-vitro diagnostic test like the Bindazyme® Anti-B2 GP1 Screen EIA Test Kit, the ground truth would most likely be based on:

    • Clinical Diagnosis: Diagnosis of Antiphospholipid Syndrome (APS) or Systemic Lupus Erythematosus (SLE) according to established clinical criteria (e.g., revised Sapporo criteria for APS, ACR/SLICC criteria for SLE), often supported by other laboratory findings and clinical presentation.
    • Reference Method: Results from a previously cleared or well-established laboratory method for detecting anti-B2GP1 antibodies or associated conditions.
    • Expert Consensus (indirectly): The diagnostic criteria themselves are often products of expert consensus; however, direct "expert consensus" on this specific kit's results is not how ground truth is typically established for biomarker assays.

    8. The sample size for the training set

    The document does not specify a training set size. For IVD kits, a "training set" in the machine-learning sense is not typically discussed. The closest analogues would be:

    • Assay Development and Optimization: Samples used during the development phase to fine-tune reagent concentrations, incubation times, cut-off values, etc. This is an iterative process and not typically reported with a fixed "sample size" like a clinical validation study.
    • Cut-off Determination: a separate set of well-characterized samples (e.g., healthy controls and known positives) would be used to establish the assay's cut-off value using statistical methods such as ROC analysis or percentile calculation (a sketch follows this list).
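
    As a sketch of the percentile-based cut-off approach mentioned above (the OD readings are synthetic; a real submission would use well-characterized clinical samples):

```python
import statistics

# Synthetic OD readings from presumed-negative (healthy control) sera.
control_ods = [0.12, 0.18, 0.15, 0.22, 0.10, 0.17, 0.25, 0.14, 0.19, 0.21]

# Percentile approach: place the cutoff at the 99th percentile of controls.
cutoff_p99 = statistics.quantiles(control_ods, n=100)[98]

# Parametric alternative: mean + 3 standard deviations of controls.
cutoff_3sd = statistics.mean(control_ods) + 3 * statistics.stdev(control_ods)

print(f"99th-percentile cutoff: {cutoff_p99:.3f}")
print(f"Mean + 3 SD cutoff:     {cutoff_3sd:.3f}")
```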

    9. How the ground truth for the training set was established

    Similar to point 8, the concept of a "training set" with ground truth in the AI sense is not applicable. For the development and optimization samples, ground truth would be established through:

    • Well-characterized samples: Use of samples from patients with confirmed diagnosis of APS/SLE (based on clinical criteria and other serological markers) and samples from healthy individuals or those with other autoimmune diseases where anti-B2GP1 is expected to be absent.
    • Comparison to existing methods: Performance against established laboratory methods during the development process.