
510(k) Data Aggregation

    K Number: K151029
    Device Name: Phadia Prime
    Manufacturer:
    Date Cleared: 2016-01-19 (277 days)
    Product Code:
    Regulation Number: 866.5750
    Reference & Predicate Devices:
    Predicate For: N/A
    Intended Use

    ImmunoCAP Specific IgE is an in vitro quantitative assay for the measurement of allergen-specific IgE in human serum or plasma (EDTA or Na-Heparin). ImmunoCAP Specific IgE is to be used with the instruments Phadia 250, Phadia 1000, Phadia 2500 and Phadia 5000. It is intended for in vitro diagnostic use as an aid in the clinical diagnosis of IgE-mediated allergic disorders in conjunction with other clinical findings, and is to be used in clinical laboratories.

    Device Description

    ImmunoCAP Specific IgE assay measures IgE antibodies to specific allergens in human serum or plasma. Presence of specific IgE antibodies is a prerequisite for an IgE-mediated allergic reaction to occur. The assay allows quantitative measurements of IgE antibodies specific to a wide range of individual allergens and allergen components.

    Today, 367 different ImmunoCAP Allergens are cleared in the US for the determination of specific IgE antibodies.

    ImmunoCAP Specific IgE is to be used with the instruments Phadia 100, Phadia 1000, Phadia 2500 and Phadia 5000. It is intended for in vitro diagnostic use as an aid in the clinical diagnosis of IgE mediated allergic disorders in conjunction with other clinical findings, and is to be used in clinical laboratories.

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the ImmunoCAP Specific IgE device:

    It is important to note that this document is a 510(k) Summary for a software update (Phadia Prime) to an existing device (ImmunoCAP Specific IgE assay), rather than a submission for an entirely new device or assay. The core assay described, "ImmunoCAP Specific IgE," has already been cleared under previous K numbers (K113841, K051218). Therefore, the "study" described herein primarily focuses on demonstrating that the new software component performs equivalently to the old software component in processing results from the already-cleared assay.


    Acceptance Criteria and Reported Device Performance

    The document describes a comparative test between the new software, Phadia Prime, and the predicate software, IDM. The core "acceptance criterion" for this software update is equivalence in output.

    Acceptance Criteria: Phadia Prime software produces the same output (results, calculations, and statistics for dedicated in vitro diagnostic tests) as the predicate IDM software when processing instrument raw values and different method settings (assays) from the ImmunoCAP Specific IgE assay. Implicitly, the algorithms for handling requests, results, calculations, and statistics must be the same.

    Reported Device Performance: A comparative test was performed using a predetermined set of data with instrument raw values and different method settings (assays). The results show that Phadia Prime produces the same output as IDM irrespective of the assay chosen. The document explicitly states: "The main functionalities of the respective software are the same; the same algorithms are used in Phadia Prime for the handling of requests, results, calculations and statistics for the dedicated in vitro diagnostic tests, as in IDM. The implementation of the software has been fully verified and found acceptable."

    Additional Information as Requested:

    1. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):

      • Sample Size: The document refers to "a predetermined set of data with instrument raw values and different method settings (assays)." It does not specify the exact number of data points, cases, or "method settings" used in this comparative test.
      • Data Provenance: Not specified. This appears to be internal validation data, likely generated during Phadia's own testing; no country of origin is mentioned, and the data consist of instrument raw values rather than explicitly retrospective or prospective patient data.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):

      • This question is not applicable in the context of this software update. The "ground truth" for this comparative test isn't established by human experts interpreting clinical data. Instead, the "ground truth" for the software comparison is the output of the predicate device's software (IDM). The goal was to show the new software (Phadia Prime) matched the existing, cleared software's calculations, not to re-evaluate the clinical accuracy of the assay itself.
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • None. Adjudication by multiple experts is not relevant here as the "ground truth" is the output of the predicate software. The comparison is a direct, algorithmic one.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • No. An MRMC study was not done. This device is an in vitro diagnostic (IVD) assay with an automated result processing system, not an AI-based image analysis or diagnostic aid for human readers. The software performs calculations and statistics; it does not "assist" a human reader in the interpretation of complex images or data in a way that would require an MRMC study.
    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

      • Yes, indirectly. The comparative test described is essentially a standalone (algorithm only) comparison. The Phadia Prime software (algorithm) was tested against the IDM software (algorithm) using instrument raw data. The study evaluated the output of the algorithm without human intervention in the result calculation process itself. However, it's important to remember this algorithm processes raw data from a laboratory test, not raw patient data or images directly requiring "human-in-the-loop performance" for initial interpretation.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • The ground truth for this software comparison was the calculated output of the predicate software (IDM). The objective was to confirm that the new software's calculations matched those of the previously cleared software.
    7. The sample size for the training set:

      • Not specified/Not Applicable. This document describes validation of a software update, not the development of a predictive model or AI that requires a "training set" in the conventional sense. The algorithms for calculations were already established and implemented in the predicate software; the new software's "training" would be more akin to software development and testing, rather than machine learning model training.
    8. How the ground truth for the training set was established:

      • Not Applicable. See point 7.