510(k) Data Aggregation
(30 days)
Abbott ARCHITECT Total T4 Calibrators
The ARCHITECT Total T4 Calibrators are for the calibration of the ARCHITECT i System when used for the quantitative determination of thyroxine (Total T4) in human serum and plasma.
The calibrators are devices intended for medical purposes for use in the ARCHITECT Total T4 assay test system to establish points of reference that are used in the quantitative determination of values in the measurement of substances in human specimens. Total T4 measurements are used as an aid in the assessment of thyroid status. The calibrators are designed to be used on the ARCHITECT i System (i2000 and i1000SR) with the ARCHITECT Total T4 Reagents.
The ARCHITECT Total T4 Calibrator kit contains:
- Calibrator A (Cal A), 1 × 4 mL
- Calibrator B (Cal B), 1 × 4 mL
- Calibrator C (Cal C), 1 × 4 mL
- Calibrator D (Cal D), 1 × 4 mL
- Calibrator E (Cal E), 1 × 4 mL
- Calibrator F (Cal F), 1 × 4 mL
Calibrator A contains human serum; Calibrators B–F contain different concentrations of T4 prepared in human serum. Preservative: sodium azide. The calibrators are prepared to target the following concentrations:
- Cal A: 0.0 ug/dL (0.0 nmol/L)
- Cal B: 3.0 ug/dL (38.6 nmol/L)
- Cal C: 6.0 ug/dL (77.2 nmol/L)
- Cal D: 12.0 ug/dL (154.4 nmol/L)
- Cal E: 18.0 ug/dL (231.7 nmol/L)
- Cal F: 24.0 ug/dL (308.9 nmol/L)
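The paired ug/dL and nmol/L targets above follow the standard thyroxine unit conversion (1 ug/dL ≈ 12.87 nmol/L, derived from T4's molar mass of roughly 776.87 g/mol). A quick sketch to verify the listed pairs:

```python
# Thyroxine unit conversion: nmol/L ≈ ug/dL × 12.87.
# The factor 12.87 comes from T4's molar mass (~776.87 g/mol):
# 1 ug/dL = 10 ug/L = 10,000 ng/L, and 10,000 / 776.87 ≈ 12.87 nmol/L.
CONVERSION_FACTOR = 12.87  # nmol/L per ug/dL

def ug_dl_to_nmol_l(ug_dl: float) -> float:
    """Convert a total T4 concentration from ug/dL to nmol/L."""
    return ug_dl * CONVERSION_FACTOR

# Verify the calibrator target pairs listed above.
for ug_dl in (0.0, 3.0, 6.0, 12.0, 18.0, 24.0):
    print(f"{ug_dl:4.1f} ug/dL = {ug_dl_to_nmol_l(ug_dl):5.1f} nmol/L")
```

Rounding each result to one decimal place reproduces the nmol/L values stated in the kit description.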
The provided document is a 510(k) premarket notification decision letter and summary for the ABBOTT LABORATORIES ARCHITECT Total T4 Calibrators. This document pertains to in-vitro diagnostic calibrators, which are used to establish reference points for quantitative determination of substances in human specimens. It is not about an AI/ML medical device, an imaging device, or a device that would typically involve human readers, ground truth consensus by experts, or MRMC studies in the way an AI diagnostic algorithm would.
Therefore, many of the questions asked, such as those related to AI/ML device performance, human readers, ground truth for imaging, and MRMC studies, are not applicable to this type of medical device.
However, I can extract information relevant to the acceptance criteria and study proving the device meets those criteria, specifically concerning its stability.
Here's a breakdown of the available information:
1. A table of acceptance criteria and the reported device performance:
The document describes stability studies for the calibrators. The "acceptance criteria" for these stability studies are referred to as "stability limit evaluation criteria." While the exact numerical criteria are not detailed (e.g., specific percentage deviation allowed), the document states that the results met these criteria.
| Acceptance Criteria (High-Level) | Reported Device Performance |
|---|---|
| In-Use Condition (Open Vial) Stability: results must meet the "stability limit evaluation criteria" at specified time points (1, 2, 3, 4, 6, 9, and 12 months). | Results support a 12-month in-use stability claim at 2–8°C. |
| Intended Storage Condition (Closed Vial) Stability: results must meet the "stability limit evaluation criteria" at specified time points (0 through 14 months, at monthly intervals). | Results support a 14-month closed-vial stability claim at 2–8°C. |
| Value Assignment: observed values must meet the manufacturer's pre-determined acceptance criteria for each new lot. | The document states that before each lot is released, the observed values must meet the manufacturer's acceptance criteria. |
2. Sample size used for the test set and the data provenance:
- In-Use Stability Study: "In-use testing was performed using a minimum of 10 replicates each of the on-test calibrators, reference controls, and reference panel."
- Closed Vial Stability Study: "three lots of test materials are stored at 2 – 8°C."
- Value Assignment (Pre-release testing): "3 instruments, 1 run per instrument, 15 replicates per run."
- Data Provenance: Not explicitly stated (e.g., country of origin). The studies appear to be laboratory-based validation studies. They are prospective for the purpose of validating the new (6-point) calibrators.
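As a rough illustration of how replicate-based stability testing works, the sketch below compares the mean of on-test replicates at a time point against the baseline mean. The ±10% limit and the replicate values are hypothetical; the document does not disclose the actual "stability limit evaluation criteria":

```python
from statistics import mean

def percent_drift(baseline_reps, on_test_reps):
    """Percent change of the on-test mean relative to the baseline mean."""
    b, t = mean(baseline_reps), mean(on_test_reps)
    return 100.0 * (t - b) / b

def meets_stability_limit(baseline_reps, on_test_reps, limit_pct=10.0):
    """True if drift at this time point is within the (hypothetical) limit."""
    return abs(percent_drift(baseline_reps, on_test_reps)) <= limit_pct

# Hypothetical example: 10 replicates of a mid-level calibrator (ug/dL)
# at baseline and again at the 12-month in-use time point.
baseline = [6.02, 5.97, 6.05, 5.99, 6.01, 6.03, 5.98, 6.00, 6.04, 5.96]
month_12 = [5.91, 5.88, 5.95, 5.90, 5.93, 5.89, 5.92, 5.94, 5.87, 5.90]
print(meets_stability_limit(baseline, month_12))  # drift ≈ -1.6%, within limit
```

In the actual studies, this kind of comparison would be repeated for each calibrator level, reference control, and time point listed above.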
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):
Not applicable. This is an in-vitro diagnostic calibrator, not an imaging or AI diagnostic device requiring expert consensus for ground truth. The "ground truth" (or reference values) for calibrators are established through physical/chemical methods (gravimetric methods, USP reference L-Thyroxine) and internal Abbott reference standards.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
Not applicable.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
Not applicable. This is not an AI-assisted device for human reader improvement.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
Not applicable. This is a physical calibrator kit, not an algorithm. The performance being evaluated is the stability and accuracy of the chemical solutions in the calibrators themselves, which are used to calibrate a lab instrument.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
The ground truth for the calibrator values themselves is established through:
- Abbott internal reference standards.
- Gravimetric methods using USP reference L-Thyroxine.
For the stability studies, the "ground truth" consists of the baseline measurements and the "stability limit evaluation criteria" against which subsequent measurements are compared.
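Gravimetric value assignment amounts to computing a concentration from a weighed mass of reference material, its certified purity, and the final solution volume. A hypothetical sketch (the function name, parameters, and workflow are illustrative, not Abbott's procedure):

```python
def gravimetric_concentration_ug_dl(mass_mg, purity_fraction, volume_ml):
    """Assigned concentration (ug/dL) from a weighed mass of reference
    material, its certified purity (0-1), and the final solution volume."""
    mass_ug = mass_mg * 1000.0 * purity_fraction  # purity-corrected mass in ug
    volume_dl = volume_ml / 100.0                 # 1 dL = 100 mL
    return mass_ug / volume_dl

# Hypothetical example: 0.06 mg of 100%-pure L-thyroxine brought to 1 L of
# serum matrix yields the Cal C target of 6.0 ug/dL.
print(gravimetric_concentration_ug_dl(0.06, 1.0, 1000.0))
```

In practice the assigned values would also be verified by measurement against internal reference standards, as the document describes.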
8. The sample size for the training set:
Not applicable. This is not a machine learning device that requires a training set.
9. How the ground truth for the training set was established:
Not applicable.
(57 days)
ABBOTT ARCHITECT TOTAL T4
The ARCHITECT™ Total T4 (TT4) assay is a Chemiluminescent Microparticle Immunoassay (CMIA) for the quantitative determination of thyroxine (total T4) in human serum and plasma. The ARCHITECT Total T4 assay is to be used as an aid in the assessment of thyroid status.
ARCHITECT Total T4 is a Chemiluminescent Microparticle Immunoassay (CMIA) for the quantitative determination of total T4 in human serum or plasma (lithium heparin, sodium heparin, or potassium EDTA). ARCHITECT Total T4 is calibrated with ARCHITECT Total T4 Calibrators. ARCHITECT Total T4 Controls are assayed for the verification of the accuracy and precision of the Abbott ARCHITECT i System.
Here's an analysis of the provided text regarding the Abbott ARCHITECT™ Total T4 device, focusing on acceptance criteria and study details:
1. Table of Acceptance Criteria and Reported Device Performance
| Performance Metric | Reported Device Performance |
|---|---|
| Correlation coefficient (least squares) | 0.928 |
| Slope (least squares) | 1.01 |
| Y-axis intercept (least squares) | -0.11 ug/dL |
| Correlation coefficient (Passing-Bablok) | 0.928 |
| Slope (Passing-Bablok) | 0.99 |
| Y-axis intercept (Passing-Bablok) | -0.13 ug/dL |
Note: The document reports only these performance metrics; it does not explicitly define pre-specified acceptance criteria (e.g., "correlation coefficient > 0.95"). However, the conclusion that "these data demonstrate that the ARCHITECT Total T4 assay is as safe and effective as, and is substantially equivalent to, the AxSYM Total T4 assay" implies that these metrics met the internal or regulatory standards for demonstrating substantial equivalence.
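For context, the two regression approaches in the table differ in their assumptions: ordinary least squares assumes measurement error only in the candidate method, while Passing-Bablok is a nonparametric, rank-based method that is robust to outliers. The sketch below shows least squares alongside a simplified Theil-Sen median-slope estimate, used here as a stand-in for full Passing-Bablok (which additionally shift-corrects the median slope); the data points are made up:

```python
from itertools import combinations
from statistics import mean, median

def least_squares(x, y):
    """Ordinary least-squares fit y = slope*x + intercept."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def median_slope_fit(x, y):
    """Theil-Sen fit: median of all pairwise slopes, then median residual
    intercept. A simplified stand-in for Passing-Bablok regression."""
    slopes = [(yj - yi) / (xj - xi)
              for (xi, yi), (xj, yj) in combinations(zip(x, y), 2)
              if xj != xi]
    b = median(slopes)
    a = median(yi - b * xi for xi, yi in zip(x, y))
    return b, a

# Made-up paired results: predicate-method values (x) vs. candidate (y), ug/dL
x = [1.2, 3.5, 5.1, 7.8, 9.4, 12.0, 15.3]
y = [1.1, 3.6, 5.0, 7.9, 9.2, 12.3, 15.1]
print(least_squares(x, y))     # slope near 1, intercept near 0
print(median_slope_fit(x, y))  # slope near 1, intercept near 0
```

With 1155 specimens, as in the actual study, both fits would be computed over the full paired data set, and a slope near 1 with an intercept near 0 indicates close agreement between the two assays.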
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 1155 specimens
- Data Provenance: The document does not specify the country of origin of the data, nor whether the study was retrospective or prospective. Method-comparison studies commonly use banked (retrospective) specimens, but specimens may also be collected prospectively for this purpose.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not provided in the document. The study is a method comparison between two assays, not a diagnostic accuracy study against a clinical ground truth established by experts. The "ground truth" in this context is the result from the predicate device (the AxSYM® Total T4 assay).
4. Adjudication Method for the Test Set
This information is not applicable/not provided. Adjudication typically applies to studies where multiple experts independently review data and their interpretations are then reconciled. This study is an analytical comparison of an investigational device against a predicate device.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
This information is not applicable. This is an analytical device comparison study, not an AI-assisted diagnostic study involving human readers.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
This study is a standalone performance assessment of the ARCHITECT™ Total T4 assay against a predicate device. There is no human-in-the-loop component mentioned; it is a direct comparison of the quantitative results from the two assays.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
The "ground truth" for this study was the results obtained from the legally marketed predicate device, the AxSYM® Total T. assay. The new device's performance was compared directly to the predicate device's output on the same specimens.
8. The Sample Size for the Training Set
This information is not provided and is likely not relevant, as this is an immunoassay device, not a machine learning or AI algorithm that typically requires a distinct training set. The device is "calibrated with ARCHITECT Total T4 Calibrators," which serve a similar function in establishing the measurement curve, but this is not typically referred to as a "training set" in the context of AI/ML.
9. How the Ground Truth for the Training Set Was Established
This information is not provided and is not applicable in the same way it would be for an AI/ML device. The "ground truth" for calibration in an immunoassay refers to the assigned values of the calibrator materials, which are typically established through a rigorous process of reference methods and standardization.