Search Results
Found 2 results
510(k) Data Aggregation
(418 days)
- 21 CFR 862.1660 Quality Control material (assayed and unassayed)
- 21 CFR 862.1150 Calibrator
- 21 CFR 866.4520
B·R·A·H·M·S™ CgA II KRYPTOR™ is an automated immunofluorescent assay using Time-Resolved Amplified Cryptate Emission (TRACE™) technology for quantitative determination of Chromogranin A concentration in human serum.
B·R·A·H·M·S™ CgA II KRYPTOR™ is to be used in conjunction with other clinical methods as an aid in monitoring of disease progression during the course of disease and treatment in patients with gastroentero-pancreatic neuroendocrine tumors (GEP-NETs, grade 1 and grade 2).
The B·R·A·H·M·S CgA II KRYPTOR assay is based on the formation of a complex comprised of a Chromogranin A (CgA) analyte "sandwiched" between two monoclonal mouse anti-CgA antibodies. One of the antibodies (537/H2) is directed at the epitope AA124–144 and labelled with DiSMP cryptate; the other antibody (541/E2) binds to AA280–301 and is labelled with Alexa Fluor® 647.
The measurement principle is based on a non-radiative energy transfer from a donor (cryptate) to an acceptor (Alexa Fluor® 647) when they are part of an immunocomplex (TRACE technology, Time-Resolved Amplified Cryptate Emission).
The fluorescent signal is proportional to the concentration of the analyte to be measured.
With this principle, B·R·A·H·M·S CgA II KRYPTOR is a homogeneous one-step immunoassay for the quantification of CgA in human serum. The linear direct measuring range of the assay is 20 to 3,000 ng/mL, extending up to 1,000,000 ng/mL with automated dilution. Results can be retrieved after a 29-minute incubation time.
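The signal-to-result flow described above can be sketched in a few lines. This is an illustrative sketch only, assuming a simple linear calibration with the intercept near zero; the slope, intercept, and dilution factor are hypothetical values, not parameters from the 510(k) summary.

```python
# Hypothetical sketch: fluorescence signal -> CgA concentration via a linear
# calibration, with automated dilution for readings above the 3,000 ng/mL
# direct measuring range. All numeric parameters are illustrative.

DIRECT_RANGE_MAX = 3_000       # ng/mL, upper end of the direct measuring range
DILUTED_RANGE_MAX = 1_000_000  # ng/mL, upper end with automated dilution

def signal_to_concentration(signal, slope, intercept):
    """Linear calibration: signal is proportional to analyte concentration."""
    return (signal - intercept) / slope

def measure(signal, slope, intercept, dilution_factor=100):
    """Return a concentration, simulating an automated re-run on dilution."""
    conc = signal_to_concentration(signal, slope, intercept)
    if conc <= DIRECT_RANGE_MAX:
        return conc
    # Simulate measuring a diluted aliquot and scaling the result back up
    # (assumes intercept is negligible relative to the signal).
    diluted_signal = signal / dilution_factor
    conc = signal_to_concentration(diluted_signal, slope, intercept) * dilution_factor
    if conc > DILUTED_RANGE_MAX:
        raise ValueError("above reportable range even after dilution")
    return conc
```

For example, with a unit slope and zero intercept, a raw reading of 5,000 would trigger the dilution branch and still report 5,000 ng/mL.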
Here's an analysis of the acceptance criteria and study findings for the B.R.A.H.M.S CgA II KRYPTOR device, based on the provided FDA 510(k) summary:
Acceptance Criteria and Reported Device Performance
Note: The provided document primarily describes analytical performance criteria and clinical performance measures (sensitivity, specificity) rather than explicit "acceptance criteria" in a pass/fail format for clinical decision-making. However, the sensitivity and specificity values obtained from the clinical study serve as the reported device performance against which implicit clinical acceptance would be judged. The analytical performance metrics are generally presented as numerical results meeting industry standards (CLSI guidelines).
| Acceptance Criteria Category | Specific Metric | Acceptance Threshold (Implicit/Standard) | Reported Device Performance |
|---|---|---|---|
| Analytical Performance | Precision (Repeatability CV) | Generally low CVs for quantitative assays (e.g., <10% at low concentrations, <5% at higher) | Low range (23.0 ng/mL): 5.2%; high range (2687 ng/mL): 1.6% |
| | Precision (Within-Laboratory CV) | Generally low CVs for quantitative assays (e.g., <10% at low concentrations, <5% at higher) | Low range (23.0 ng/mL): 10.0%; high range (2687 ng/mL): 7.4% |
| | Precision (Lot-to-Lot CV) | Generally low CVs (e.g., <5%) | Low range (26.3 ng/mL): 1.2%; high range (2895 ng/mL): 0.0% (0.0% may indicate very low variability or rounding) |
| | Precision (Reproducibility CV) | Generally low CVs (e.g., <10% at low concentrations, <5% at higher) | Low range (25.7 ng/mL): 9.0%; high range (92,561 ng/mL): 5.6% |
| | Limit of Blank (LoB) | As low as technically feasible, ensuring differentiation from zero | 11.3 ng/mL |
| | Limit of Detection (LoD) | As low as technically feasible, ensuring reliable detection of low concentrations | 14.0 ng/mL |
| | Limit of Quantitation (LoQ) | Lowest concentration with a within-laboratory precision CV of ≤20% | 20.0 ng/mL (met ≤20% CV) |
| | Linearity Range | Demonstrated across the claimed measuring range | 20.0 ng/mL (LoQ) up to 1,000,000 ng/mL (with dilution) |
| | Dilution Recovery | Typically within 85–115% of expected values | Mean recovery values of 97.6%–109.6% |
| | Spike Recovery | Typically within 90–110% of expected values | Individual recovery values of 91%–109% |
| | High-Dose Hook Effect | Absent or managed by automatic detection/dilution | Detected by kinetics analysis; automatic dilution for samples >3,000 ng/mL extends the range up to 1,000,000 ng/mL |
| | Interference | Bias ≤10% for common endogenous and exogenous interfering substances | Evaluated substances did not affect test performance (bias ≤10%) at clinically relevant concentrations |
| | Cross-Reactivity | Low cross-reactivity with structurally similar substances | −21.6% to 0.03% (for various CgA fragments and related proteins) |
| Clinical Performance | Clinical Sensitivity (tumor progression; ΔCgA >50% and >100 ng/mL cutoff) | Sufficient to aid monitoring, balanced against specificity given the intended use (aid, not standalone diagnosis) | 34.4% (95% CI: 23.2%–45.5%) |
| | Clinical Specificity (tumor progression; ΔCgA >50% and >100 ng/mL cutoff) | Sufficient to aid monitoring, balanced against sensitivity given the intended use (aid, not standalone diagnosis) | 93.4% (95% CI: 90.2%–96.0%) |
| | Positive Predictive Value (PPV) | Relevant for clinical utility given prevalence | 57.9% (95% CI: 40.5%–73.6%) |
| | Negative Predictive Value (NPV) | Relevant for clinical utility given prevalence | 84.3% (95% CI: 79.3%–89.1%) |
Study Details:
1. Sample size used for the test set and the data provenance:
- Clinical Study (for Sensitivity and Specificity): 153 adult GEP-NET patients (grade 1 and 2), with 459 total observations (likely reflecting multiple monitoring visits per patient). The study was described as a prospective study.
- Clinical Cut-off Derivation: 102 patients with diagnosed well-differentiated G1 and G2 GEP-NETs. This was a retrospective, bicentric observational pilot study.
- Reference Range Determination: 206 samples from self-declared healthy individuals. Data provenance is USA.
- Analytical studies: Various sample sizes were used, often involving replicates of pooled or individual human serum samples. For example, LoQ used 420 total replicates from 7 different pools of human serum samples.
- Provenance for analytical samples: Not explicitly stated; the materials are described only as "human serum samples."
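The replicate-based precision metrics cited above (and the CV ≤20% LoQ criterion) reduce to a simple coefficient-of-variation calculation. The sketch below is illustrative; the replicate values in the example are invented, not data from the LoQ study.

```python
# Sketch of the within-laboratory CV calculation underlying the LoQ
# criterion (lowest concentration at which CV <= 20%). Example replicate
# values are illustrative only.
import statistics

def cv_percent(replicates):
    """Coefficient of variation: sample SD as a percentage of the mean."""
    mean = statistics.mean(replicates)
    sd = statistics.stdev(replicates)  # sample (n-1) standard deviation
    return 100.0 * sd / mean

def meets_loq_criterion(replicates, max_cv=20.0):
    """True if the replicate set satisfies the CV <= 20% LoQ requirement."""
    return cv_percent(replicates) <= max_cv
```

For instance, replicates of 9, 10, and 11 ng/mL give a CV of 10%, which would satisfy the ≤20% criterion.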
2. Number of experts used to establish the ground truth for the test set, and their qualifications (e.g., radiologist with 10 years of experience):
- For the clinical study, tumor progression was classified by RECIST 1.1 criteria. This implies that experts (typically radiologists or oncologists) were involved in interpreting imaging (CT/MRI) according to these established criteria to determine the ground truth for tumor progression.
- The document does not specify the direct number of experts or their specific qualifications (e.g., "radiologist with 10 years of experience"). However, RECIST 1.1 is an internationally recognized standard for evaluating cancer treatment response based on imaging, implying adjudication by qualified personnel.
3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- The ground truth for tumor progression in the clinical studies was established using RECIST 1.1 criteria based on standard imaging (CT/MRI).
- The document does not explicitly state an adjudication method like "2+1" or "3+1" for discordant interpretations if multiple readers were involved in RECIST assessment. However, RECIST guidelines themselves are designed to standardize interpretation, and clinical trials often employ independent central review or consensus panels for definitive RECIST ratings, though this specific detail is not provided here.
4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not done. This device is an in vitro diagnostic (IVD) for quantitative determination of Chromogranin A concentration in human serum, intended to be used in conjunction with other clinical methods as an aid in monitoring. It is not an AI-assisted imaging device or a device that directly aids human readers in interpreting images.
5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- This is an IVD assay, which functions as a "standalone" measurement of a biomarker in serum. The results are generated by the automated instrument (B.R.A.H.M.S KRYPTOR compact PLUS analyzer) without direct human interpretation of the measurement itself. However, the device's output (CgA concentration) is explicitly stated to not be used for standalone diagnosis or monitoring but "in conjunction with other clinical methods." So while the analytical measurement is standalone, the clinical interpretation for decision-making is not.
6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the clinical performance evaluation (sensitivity and specificity for tumor progression), the ground truth was imaging-based tumor assessment using RECIST 1.1 criteria. This is a form of expert assessment based on a standardized methodology, often relying on radiologists and oncologists to interpret imaging studies.
7. The sample size for the training set:
- This document describes an IVD device submission, not a machine learning/AI device. Therefore, the concept of a "training set" for an algorithm in the typical AI sense does not directly apply. The development and validation of the assay itself would have involved numerous samples for optimization and establishment of analytical performance characteristics, but these are not referred to as a "training set" here.
8. How the ground truth for the training set was established:
- As addressed above, the concept of a "training set" in the context of machine learning/AI is largely inapplicable here. The development of the assay's analytical characteristics (e.g., linearity, precision, detection limits) would be established through standard laboratory practices and reference materials, for which "ground truth" is defined by known concentrations or established analytical methods.
(71 days)
Classification Name: Immunofluorometer equipment (per 21 CFR §866.4520)
In combination with approved microplate tests (such as the Bartels manufactured, Zymmune™ CD4/CD8 Cell monitoring assay kit) the Zymmune™ Auto-Reader F [with software] is intended for use for in vitro quantification and monitoring of T-cell levels.
The Zymmune™ Auto-Reader F is a microplate reader that measures relative fluorescence signals from samples in a 96-well microplate. The Auto-Reader F is designed to be used with integrated software which collects and reports the measurements made by the microplate reader.
The provided text describes the Zymmune™ Auto-Reader F, a fluorescence microplate reader designed to quantify and monitor T-cell levels. Here's an analysis of its acceptance criteria and the study performed:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state acceptance criteria in a quantitative format. However, the study aims to demonstrate "excellent correlation" between the Zymmune™ Auto-Reader F (with software) and the predicate device (Fluoroskan with manual calculations). The reported device performance is presented as:
| (n=70) | Slope | Intercept | Correlation Coefficient | Acceptance Criteria (Implicit) |
|---|---|---|---|---|
| CD4 T-lymphocyte | 1.018 | 3.6 | 0.994 | Strong correlation (e.g., R > 0.9) and slopes near 1, intercepts near 0 |
| CD8 T-lymphocyte | 1.035 | -3.8 | 0.992 | Strong correlation (e.g., R > 0.9) and slopes near 1, intercepts near 0 |
| CD4/CD8 ratio | 0.982 | 0 | 0.989 | Strong correlation (e.g., R > 0.9) and slopes near 1, intercepts near 0 |
The "Implicit Acceptance Criteria" are inferred from the conclusion stating "Analysis showed excellent correlation" and the high R-values and slopes close to 1 with intercepts close to 0, which are standard indicators of strong agreement in correlation studies.
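The slope, intercept, and correlation coefficient reported in the table can be derived as sketched below. The 510(k) summary does not state which regression method was used; ordinary least squares with a Pearson correlation coefficient is shown here only to illustrate how "slope near 1, intercept near 0, r > 0.9" is computed in a device-to-device comparison.

```python
# Sketch of a method-comparison analysis: ordinary least-squares slope and
# intercept of candidate-device results against reference-device results,
# plus the Pearson correlation coefficient. Input data here are illustrative.
import statistics

def method_comparison(reference, candidate):
    """Return (slope, intercept, r) for candidate regressed on reference."""
    mean_x = statistics.mean(reference)
    mean_y = statistics.mean(candidate)
    sxx = sum((x - mean_x) ** 2 for x in reference)
    syy = sum((y - mean_y) ** 2 for y in candidate)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(reference, candidate))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    r = sxy / (sxx * syy) ** 0.5  # Pearson correlation coefficient
    return slope, intercept, r
```

A perfectly proportional pair of methods (e.g., candidate readings exactly twice the reference) would yield r = 1.0 but a slope of 2, which is why agreement requires both r near 1 and slope near 1 with intercept near 0.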
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size: 70 clinical samples.
- Data Provenance: The document does not specify the country of origin. It indicates "clinical samples," suggesting they were obtained from patient populations, but does not state whether they were collected retrospectively or prospectively.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided in the document. The ground truth for the test set is established by the predicate device (Fluoroskan) with manual calculations using an FDA-reviewed assay kit, not by human experts adjudicating results.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
There was no adjudication method described for the test set in the traditional sense of human experts reviewing cases. The comparison was device-to-device, with the predicate device's output (processed manually) serving as the reference.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. This study focuses on the performance of a device (the Auto-Reader F with integrated software) compared to another device (Fluoroskan with manual calculations), not on the improvement of human readers with AI assistance. The Zymmune™ Auto-Reader F integrates software, but the study design is not an MRMC study assessing human performance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, a standalone performance evaluation of the Zymmune™ Auto-Reader F (which includes an integrated software element that utilizes algorithms) was done. The device directly produces CD4 and CD8 T-lymphocyte counts and a CD4/CD8 ratio from the fluorescent signals without human intervention in the calculation process from signal to result. This standalone performance was then compared to the predicate device's results.
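The signal-to-result pipeline described above can be illustrated schematically. This is a hypothetical sketch only: the actual conversion algorithms belong to the Zymmune™ CD4/CD8 kit and are not described in the document, so the linear standard curves and all parameter values below are assumptions for illustration.

```python
# Hypothetical illustration: converting relative fluorescence units (RFU)
# to CD4 and CD8 cell counts via per-analyte linear standard curves, then
# forming the CD4/CD8 ratio. Curve parameters are invented placeholders,
# NOT the kit's actual algorithm.

def cells_from_rfu(rfu, slope, intercept):
    """Invert a linear standard curve; clamp at zero for sub-blank signals."""
    return max(0.0, (rfu - intercept) / slope)

def cd4_cd8_panel(cd4_rfu, cd8_rfu, cd4_curve, cd8_curve):
    """Return (CD4 count, CD8 count, CD4/CD8 ratio) from two RFU readings."""
    cd4 = cells_from_rfu(cd4_rfu, *cd4_curve)
    cd8 = cells_from_rfu(cd8_rfu, *cd8_curve)
    ratio = cd4 / cd8 if cd8 > 0 else float("nan")
    return cd4, cd8, ratio
```

The point of the standalone comparison is that a pipeline like this runs end to end without a human in the calculation loop; agreement is then checked against the predicate method's outputs.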
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The ground truth for this correlation study was established by the predicate device (Flow Laboratories Fluoroskan Reader) combined with manual calculations using the algorithms provided in the FDA-reviewed Zymmune™ CD4/CD8 Cell Monitoring Kit. This can be considered a "reference method" ground truth, where another established and validated method is used as the standard.
8. The sample size for the training set
The document does not specify a separate "training set" size. The 70 clinical samples are described as being used to "validate the performance of the Auto-Reader F," which indicates they served as a test set for the performance comparison. The algorithms within the integrated software may have been developed previously, but the document does not provide details on that process or its sample size.
9. How the ground truth for the training set was established
As no specific training set is detailed, information on how its ground truth was established is not provided in this document. The integrated software utilizes "algorithms appearing in the Zymmune™ CD4/CD8 Monitoring kit," suggesting these algorithms were previously developed and validated, likely with their own ground truth established during the development of that kit (K933878).