The AutoDELFIA Neonatal IRT kit is intended for the quantitative determination of human immunoreactive trypsin(ogen) (IRT) in blood specimens dried on filter paper as an aid in screening newborns for cystic fibrosis using the 1235 AutoDELFIA® automatic immunoassay system.
The AutoDELFIA Neonatal IRT assay is a solid phase, two-site fluoroimmunometric assay based on the direct sandwich technique in which two monoclonal antibodies (derived from mice) are directed against two separate antigenic determinants on the IRT molecule. Calibrators, controls and test specimens containing IRT are reacted simultaneously with immobilized monoclonal antibodies directed against a specific antigenic site on the IRT molecule and europium-labeled monoclonal antibodies (directed against a different antigenic site) in assay buffer. The assay buffer elutes IRT from the dried blood on filter paper disks. The complete assay requires only one incubation step. Enhancement Solution dissociates europium ions from the labeled antibody into solution where they form highly fluorescent chelates with components of the Enhancement Solution. The fluorescence in each well is then measured. The fluorescence of each sample is proportional to the concentration of IRT in the sample.
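To illustrate the last point, here is a minimal sketch of how a well's fluorescence could be converted to an IRT concentration by interpolating a calibration curve built from the kit's calibrators. All numbers are hypothetical, and the actual curve fitting is performed by the instrument software; the submission does not describe it.

```python
import numpy as np

# Hypothetical calibrator concentrations (ng/mL blood) and fluorescence
# counts; the real values come from the kit calibrators and the 1235
# AutoDELFIA instrument, not from this sketch.
calib_conc = np.array([0.0, 16.0, 60.0, 120.0, 240.0, 480.0])
calib_signal = np.array([1_200, 18_500, 69_000, 138_000, 271_000, 540_000])

def irt_from_fluorescence(signal_counts: float) -> float:
    """Interpolate the calibration curve to convert a well's fluorescence
    into an IRT concentration (signal is roughly proportional to IRT)."""
    return float(np.interp(signal_counts, calib_signal, calib_conc))

# Example: a dried-blood-spot well reading 95,000 counts
print(f"Estimated IRT: {irt_from_fluorescence(95_000):.1f} ng/mL blood")
```

Commercial immunoassay software typically fits a smoothed curve rather than using straight-line interpolation, but the stated proportionality between fluorescence and IRT concentration makes linear interpolation a reasonable stand-in for illustration.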
The provided text describes a 510(k) premarket notification for the AutoDELFIA Neonatal IRT kit. The submission demonstrates substantial equivalence to a predicate device rather than reporting a study with explicit acceptance criteria and performance data in the format typically used for AI/ML device evaluations, so many of the requested fields are not directly applicable or are not stated in the document.
The device's analytical performance characteristics, however, serve as a form of acceptance criteria for this type of in-vitro diagnostic device.
The table and answers below are drawn from the provided text, noting where information is not available.
1. Table of Acceptance Criteria and the Reported Device Performance
For this in-vitro diagnostic device, "acceptance criteria" are generally established by demonstrating performance characteristics that are comparable to or better than a legally marketed predicate device, and that meet the required analytical performance for its intended use.
| Characteristic (Feature) | Acceptance Criteria (from Predicate Device) | Reported Device Performance (New Device: B005-212/B005-204) |
|---|---|---|
| Measuring Range | 4 (as defined by LoB) to 500 (as defined by upper calibrator) ng/mL blood | 16 to 480 ng/mL blood |
| Linearity Range | No claims for linearity in labeling. | 16 to 480 ng/mL blood |
| Analytical Sensitivity / Limit of Blank (LoB) | < 4 ng/mL blood | 0.53 ng/mL blood |
| Limit of Detection (LoD) | Not explicitly stated, implied to be around 4 ng/mL blood (from LoB) | 2.9 ng/mL blood |
| Antibody Cross-Reactions | α2-macroglobulin < 4 ng/mL blood; α1-antitrypsin < 4 ng/mL blood; phospholipase A2 < 4 ng/mL blood; chymotrypsin < 4 ng/mL blood; human IgG < 4 ng/mL blood; uropepsinogen < 4 ng/mL blood | Same as predicate: α2-macroglobulin, α1-antitrypsin, phospholipase A2, chymotrypsin, human IgG, and uropepsinogen all < 4 ng/mL blood |
| Hook effect | No hook effect has been found with IRT concentrations up to 40,000 ng/mL | No hook effect has been found with IRT concentrations up to 40,000 ng/mL |
| Precision (Total Variation, CV%) | 42.6 ng/mL blood: 9.3%; 98.8 ng/mL blood: 10.0%; 266 ng/mL blood: 9.6% | 16.7 ng/mL blood: 8.7%; 22.5 ng/mL blood: 9.6%; 48.0 ng/mL blood: 9.1%; 104 ng/mL blood: 8.0%; 247 ng/mL blood: 8.3%; 401 ng/mL blood: 8.4%; 449 ng/mL blood: 9.4% |
Note on "Acceptance Criteria": For this 510(k) submission, the "acceptance criteria" are implied to be achieving analytical performance characteristics that are comparable to or improved from the predicate device, thereby demonstrating substantial equivalence. The table shows that the new device generally performs comparably or better (e.g., lower LoB, explicit linearity claim, more detailed precision data, and a wider range of concentrations with good precision).
Regarding the study proving the device meets the acceptance criteria:
The document describes the submission as a 510(k) for an in-vitro diagnostic kit. The "study" here refers to the analytical performance evaluation conducted by the manufacturer to demonstrate substantial equivalence to the predicate device. The information provided is a summary of the device's analytical characteristics.
2. Sample size used for the test set and the data provenance:
- Sample Size: Not explicitly stated in terms of the number of individual patient samples. The precision data list several concentration levels (e.g., 16.7 ng/mL, 22.5 ng/mL), implying multiple replicate measurements at each level (a minimal CV calculation is sketched after this list). The cross-reactivity and hook-effect studies would have involved specific spiked samples.
- Data Provenance: Not explicitly stated (e.g., country of origin). This appears to be an in-house analytical validation, likely conducted at the manufacturer's facility. The evaluation appears to be a bench study of laboratory-prepared samples or collected blood spots rather than a prospective clinical study involving external patient recruitment.
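A total CV% at one precision level is computed from replicate results roughly as shown below; the replicate values are made up, and a full precision study would normally use a nested design (e.g., CLSI EP5) to separate within-run and between-run variation.

```python
import numpy as np

# Hypothetical replicate results (ng/mL blood) at one precision level;
# the submission reports total CV% at several levels (e.g., 8.7% at
# 16.7 ng/mL blood) but not the underlying replicates or study design.
replicates = np.array([15.1, 17.9, 16.4, 18.2, 15.8, 16.9, 14.8, 18.5])

mean = replicates.mean()
sd = replicates.std(ddof=1)          # sample standard deviation
cv_percent = 100.0 * sd / mean       # coefficient of variation, in percent
print(f"Mean {mean:.1f} ng/mL blood, SD {sd:.2f}, CV {cv_percent:.1f}%")
```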
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This question is more applicable to AI/ML devices that rely on expert interpretation for ground truth. For this in-vitro diagnostic assay, the "ground truth" for reported values (e.g., IRT concentration) is established by the analytical method itself and calibration against known standards. There's no mention of external expert consensus for establishing ground truth for the analytical performance data.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not applicable for this type of analytical performance study of an in-vitro diagnostic kit. Adjudication methods like 2+1 or 3+1 are typically used in clinical studies where multiple human readers interpret medical images or clinical data, and a disagreement resolution process is needed to establish a definitive ground truth.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- Not applicable. This is not an AI/ML device designed to assist human readers. It's an automated immunoassay system for quantitative measurement.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- No, this is not an AI/ML algorithm. It is an automated immunoassay kit where the "algorithm" is the biochemical reaction and the instrument's measurement and calculation of IRT concentration. The device operates in a standalone analytical capacity to measure IRT.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The ground truth for the analytical performance characteristics (such as concentration, linearity, limit of blank, limit of detection, cross-reactivity, hook effect, and precision) would be established by:
- Reference materials/known standards: For calibration, linearity, and determining accurate concentrations.
- Spiked samples: For cross-reactivity and hook-effect studies, where known interferents or high analyte concentrations are added (a minimal recovery calculation is sketched after this list).
- Repeated measurements: For precision studies.
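As an example of how spiked-sample results are typically evaluated, the sketch below computes percent recovery of added analyte; the numbers are hypothetical, since the submission does not report the underlying cross-reactivity or interference data.

```python
# Hypothetical spike-recovery check of the kind used in cross-reactivity and
# interference work; the 510(k) summary does not provide the underlying data.
def percent_recovery(measured_spiked: float, measured_base: float,
                     added: float) -> float:
    """Recovery (%) of analyte added to a base specimen."""
    return 100.0 * (measured_spiked - measured_base) / added

# Example: a base blood spot reads 40 ng/mL, 100 ng/mL of IRT is added,
# and the spiked spot reads 135 ng/mL -> 95% recovery.
print(f"Recovery: {percent_recovery(135.0, 40.0, 100.0):.0f}%")
```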
8. The sample size for the training set:
- Not explicitly stated, and the concept of a "training set" as understood in AI/ML is not directly applicable. For this type of device, development involves optimizing the assay components and conditions, which is an iterative process using various samples (e.g., patient samples, spiked samples, controls) but not typically referred to as a discrete "training set" in the AI/ML context.
9. How the ground truth for the training set was established:
- As above, the concept of a "training set" with established ground truth in the AI/ML sense is not relevant here. Ground truth for internal development and optimization would be based on the known biochemical properties of the reagents, reference standards, and performance evaluation criteria.