K Number
K222251
Date Cleared
2023-09-18

(418 days)

Product Code
Regulation Number
866.6010
Panel
IM
Reference & Predicate Devices
Intended Use

B·R·A·H·M·S™ CgA II KRYPTOR™ is an automated immunofluorescent assay using Time-Resolved Amplified Cryptate Emission (TRACE™) technology for quantitative determination of Chromogranin A concentration in human serum.

B·R·A·H·M·S™ CgA II KRYPTOR™ is to be used in conjunction with other clinical methods as an aid in monitoring of disease progression during the course of disease and treatment in patients with gastroentero-pancreatic neuroendocrine tumors (GEP-NETs, grade 1 and grade 2).

Device Description

The B·R·A·H·M·S CgA II KRYPTOR assay is based on the formation of a complex in which the Chromogranin A (CgA) analyte is "sandwiched" between two monoclonal mouse anti-CgA antibodies. One antibody (537/H2) is directed at the epitope AA124–144 and labelled with DiSMP cryptate; the other antibody (541/E2) binds to AA280–301 and is labelled with Alexa Fluor® 647.

The measurement principle is based on non-radiative energy transfer from a donor (cryptate) to an acceptor (Alexa Fluor® 647) when both are part of an immunocomplex; this is the TRACE (Time-Resolved Amplified Cryptate Emission) technology.

The fluorescent signal is proportional to the concentration of the analyte to be measured.

Based on this principle, B·R·A·H·M·S CgA II KRYPTOR is a homogeneous one-step immunoassay for the quantification of CgA in human serum. The linear direct measuring range of the assay is 20 to 3,000 ng/mL, extending up to 1,000,000 ng/mL with automated dilution. Results are available after a 29-minute incubation time.
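
To make the measurement principle concrete, the following is a minimal sketch assuming a hypothetical four-parameter logistic (4PL) calibration curve; the function names and parameter values are illustrative and are not taken from the B·R·A·H·M·S assay documentation. It shows how a TRACE fluorescence ratio could be mapped to a CgA concentration and how an automated dilution factor extends the reportable range beyond the 20 to 3,000 ng/mL direct range.

```python
# Minimal sketch of mapping a TRACE fluorescence ratio to a CgA concentration.
# The 4PL calibration parameters below are placeholders, not assay values.

DIRECT_RANGE_NG_ML = (20.0, 3_000.0)      # linear direct measuring range
MAX_WITH_DILUTION_NG_ML = 1_000_000.0     # upper limit with automated dilution


def signal_to_concentration(ratio: float,
                            a: float = 0.02,   # response at zero concentration
                            b: float = 1.1,    # slope factor
                            c: float = 900.0,  # inflection point (ng/mL)
                            d: float = 2.4) -> float:
    """Invert a 4PL curve: ratio = d + (a - d) / (1 + (conc / c) ** b)."""
    return c * ((a - d) / (ratio - d) - 1.0) ** (1.0 / b)


def measure(ratio: float, dilution_factor: float = 1.0) -> float:
    """Report CgA in ng/mL; dilution_factor > 1 models the automated dilution
    the analyzer applies when the direct range is exceeded."""
    conc = signal_to_concentration(ratio) * dilution_factor
    low, high = DIRECT_RANGE_NG_ML
    if dilution_factor == 1.0 and not (low <= conc <= high):
        raise ValueError("outside the direct measuring range; re-measure with dilution")
    if conc > MAX_WITH_DILUTION_NG_ML:
        raise ValueError("above the maximum reportable concentration")
    return conc
```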

AI/ML Overview

Here's an analysis of the acceptance criteria and study findings for the B·R·A·H·M·S CgA II KRYPTOR device, based on the provided FDA 510(k) summary:

Acceptance Criteria and Reported Device Performance

Note: The provided document primarily describes analytical performance criteria and clinical performance measures (sensitivity, specificity) rather than explicit "acceptance criteria" in a pass/fail format for clinical decision-making. However, the sensitivity and specificity values obtained from the clinical study serve as the reported device performance against which implicit clinical acceptance would be judged. The analytical performance metrics are generally presented as numerical results meeting industry standards (CLSI guidelines).

| Acceptance Criteria Category | Specific Metric | Acceptance Threshold (Implicit/Standard) | Reported Device Performance |
| --- | --- | --- | --- |
| Analytical Performance | Precision (Repeatability CV) | Generally low CVs for quantitative assays | |
| Analytical Performance | Measuring Range | | Direct measuring range 20 to 3,000 ng/mL, extending up to 1,000,000 ng/mL with automated dilution |
| Analytical Performance | Interference | Bias ≤ 10% for common endogenous and exogenous interfering substances | Substances evaluated were found not to affect test performance (bias ≤ 10%) at clinically relevant concentrations |
| Analytical Performance | Cross-Reactivity | Low cross-reactivity with structurally similar substances | Between -21.6% and 0.03% (for various CgA fragments and related proteins) |
| Clinical Performance | Clinical Sensitivity (tumor progression at ΔCgA > 50% and >100 ng/mL cutoff) | Sufficient to aid monitoring, balancing with specificity given the intended use (aid, not standalone diagnosis) | 34.4% (95% CI: 23.2% - 45.5%) |
| Clinical Performance | Clinical Specificity (tumor progression at ΔCgA > 50% and >100 ng/mL cutoff) | Sufficient to aid monitoring, balancing with sensitivity given the intended use (aid, not standalone diagnosis) | 93.4% (95% CI: 90.2% - 96.0%) |
| Clinical Performance | Positive Predictive Value (PPV) | Relevant for clinical utility given prevalence | 57.9% (95% CI: 40.5% - 73.6%) |
| Clinical Performance | Negative Predictive Value (NPV) | Relevant for clinical utility given prevalence | 84.3% (95% CI: 79.3% - 89.1%) |
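
The clinical metrics in the table above follow from a standard 2x2 comparison of the CgA-based progression flag against the RECIST 1.1 reference. The sketch below is illustrative only: the progression rule (>50% CgA increase and >100 ng/mL) is taken from the table, but the 2x2 counts are placeholders rather than the study data, and the Wilson score interval is shown as one common way to compute a 95% CI; the submission does not state which interval method was used.

```python
from math import sqrt


def cga_progression_flag(baseline_ng_ml: float, followup_ng_ml: float) -> bool:
    """Flag progression per the reported cutoff: >50% rise and >100 ng/mL."""
    delta_pct = (followup_ng_ml - baseline_ng_ml) / baseline_ng_ml * 100.0
    return delta_pct > 50.0 and followup_ng_ml > 100.0


def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% CI for a proportion (one common choice of method)."""
    p = successes / n
    denom = 1.0 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1.0 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half


def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    """Sensitivity, specificity, PPV and NPV from a 2x2 table vs. RECIST 1.1."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }


# Placeholder counts only, not the study's 459 observations:
print(diagnostic_metrics(tp=30, fp=12, fn=50, tn=160))
print(wilson_ci(successes=30, n=80))  # CI for the sensitivity above
```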

Study Details:

  1. Sample size used for the test set and the data provenance:

    • Clinical Study (for Sensitivity and Specificity): 153 adult GEP-NET patients (grade 1 and 2), with 459 total observations (likely reflecting multiple monitoring visits per patient). The study was described as a prospective study.
    • Clinical Cut-off Derivation: 102 patients with diagnosed well-differentiated G1 and G2 GEP-NETs. This was a retrospective, bicentric observational pilot study.
    • Reference Range Determination: 206 samples from self-declared healthy individuals. Data provenance is USA.
    • Analytical studies: Various sample sizes were used, often involving replicates of pooled or individual human serum samples. For example, LoQ used 420 total replicates from 7 different pools of human serum samples.
    • Provenance for analytical samples: Not explicitly stated but generally implied to be from human subjects, for instance, "human serum samples".
  2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):

    • For the clinical study, tumor progression was classified by RECIST 1.1 criteria. This implies that experts (typically radiologists or oncologists) were involved in interpreting imaging (CT/MRI) according to these established criteria to determine the ground truth for tumor progression.
    • The document does not specify the direct number of experts or their specific qualifications (e.g., "radiologist with 10 years of experience"). However, RECIST 1.1 is an internationally recognized standard for evaluating cancer treatment response based on imaging, implying adjudication by qualified personnel.
  3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • The ground truth for tumor progression in the clinical studies was established using RECIST 1.1 criteria based on standard imaging (CT/MRI).
    • The document does not explicitly state an adjudication method like "2+1" or "3+1" for discordant interpretations if multiple readers were involved in RECIST assessment. However, RECIST guidelines themselves are designed to standardize interpretation, and clinical trials often employ independent central review or consensus panels for definitive RECIST ratings, though this specific detail is not provided here.
  4. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without:

    • No, an MRMC comparative effectiveness study was not done. This device is an in vitro diagnostic (IVD) for quantitative determination of Chromogranin A concentration in human serum, intended to be used in conjunction with other clinical methods as an aid in monitoring. It is not an AI-assisted imaging device or a device that directly aids human readers in interpreting images.
  5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • This is an IVD assay, which functions as a "standalone" measurement of a biomarker in serum. The results are generated by the automated instrument (B.R.A.H.M.S KRYPTOR compact PLUS analyzer) without direct human interpretation of the measurement itself. However, the device's output (CgA concentration) is explicitly stated to not be used for standalone diagnosis or monitoring but "in conjunction with other clinical methods." So while the analytical measurement is standalone, the clinical interpretation for decision-making is not.
  6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • For the clinical performance evaluation (sensitivity and specificity for tumor progression), the ground truth was imaging-based tumor assessment using RECIST 1.1 criteria. This is a form of expert assessment based on a standardized methodology, often relying on radiologists and oncologists to interpret imaging studies (a simplified sketch of the RECIST 1.1 progression rule follows this list).
  7. The sample size for the training set:

    • This document describes an IVD device submission, not a machine learning/AI device. Therefore, the concept of a "training set" for an algorithm in the typical AI sense does not directly apply. The development and validation of the assay itself would have involved numerous samples for optimization and establishment of analytical performance characteristics, but these are not referred to as a "training set" here.
  8. How the ground truth for the training set was established:

    • As addressed above, the concept of a "training set" in the context of machine learning/AI is largely inapplicable here. The development of the assay's analytical characteristics (e.g., linearity, precision, detection limits) would be established through standard laboratory practices and reference materials, for which "ground truth" is defined by known concentrations or established analytical methods.
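
Because RECIST 1.1 served as the reference standard, it may help to spell out its target-lesion progression rule: progressive disease requires at least a 20% increase in the sum of lesion diameters relative to the smallest sum on study (the nadir), together with an absolute increase of at least 5 mm, or the appearance of new lesions. The sketch below is a simplified illustration of that rule, not a description of the study's adjudication workflow.

```python
def recist_pd(sums_mm: list[float], new_lesion: bool = False) -> bool:
    """Simplified RECIST 1.1 target-lesion progression check.

    sums_mm holds the sum of target-lesion diameters (mm) at each timepoint,
    ending with the current assessment; a non-zero nadir is assumed.
    """
    current, nadir = sums_mm[-1], min(sums_mm)
    if new_lesion:
        return True
    return (current - nadir) / nadir >= 0.20 and (current - nadir) >= 5.0


# Example: nadir 42 mm, current 52 mm -> +23.8% and +10 mm -> progression
print(recist_pd([60.0, 42.0, 52.0]))  # True
```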

§ 866.6010 Tumor-associated antigen immunological test system.

(a) Identification. A tumor-associated antigen immunological test system is a device that consists of reagents used to qualitatively or quantitatively measure, by immunochemical techniques, tumor-associated antigens in serum, plasma, urine, or other body fluids. This device is intended as an aid in monitoring patients for disease progress or response to therapy or for the detection of recurrent or residual disease.

(b) Classification. Class II (special controls). Tumor markers must comply with the following special controls: (1) A guidance document entitled “Guidance Document for the Submission of Tumor Associated Antigen Premarket Notifications (510(k)s) to FDA,” and (2) voluntary assay performance standards issued by the National Committee on Clinical Laboratory Standards.