510(k) Data Aggregation (120 days)
Dri-STAT® Reagent ACP is intended for use in the in vitro diagnostic determination of total acid phosphatase and non-prostatic acid phosphatase in human serum as a User Defined Reagent (UDR) application on SYNCHRON® Systems.
The Dri-STAT® Reagent ACP may be used on the family of Synchron Systems. The reagent kit contains 20 reagent bottles that need to be manually transferred into a Beckman Coulter User-Defined Cartridge, along with 1 bottle of Acetate Buffer and a sample treatment.
Here's a breakdown of the acceptance criteria and study information for the Dri-STAT® ACP Reagent, based on the provided text:
Acceptance Criteria and Device Performance
The provided document does not explicitly state pre-defined acceptance criteria in the typical sense of threshold values (e.g., "r-value > 0.95"). Instead, it presents performance data for method comparison and imprecision. The implicit acceptance criterion is that the performance of the candidate device (Dri-STAT® ACP Reagent on Synchron Systems) is substantially equivalent to the predicate device (Dri-STAT® ACP Reagent on Cobas Fara). Substantial equivalence is demonstrated through the presented performance data.
| Performance Metric | Acceptance Criteria (Implicit) | Reported Device Performance (Dri-STAT® ACP Reagent on Synchron Systems) |
|---|---|---|
| Method Comparison | Substantially equivalent to predicate device (Dri-STAT® ACP on Cobas Fara) | |
| TACP (Synchron LX) | High correlation (R) and reasonable slope/intercept | Slope: 1.093, Intercept: 0.143, R: 0.994 (n=94) |
| NPAP (Synchron LX) | High correlation (R) and reasonable slope/intercept | Slope: 1.066, Intercept: -0.197, R: 0.979 (n=47) |
| TACP (Synchron CX) | High correlation (R) and reasonable slope/intercept | Slope: 1.075, Intercept: 0.460, R: 0.997 (n=94) |
| NPAP (Synchron CX) | High correlation (R) and reasonable slope/intercept | Slope: 1.088, Intercept: -0.172, R: 0.994 (n=47) |
| Imprecision (TACP) | Low coefficient of variation (%C.V.) for controls and human pool | |
| Within-Run (Control 1) | Not specified explicitly, but generally <5-10% for clinical assays | 4.72% C.V. (Mean: 3.81 U/L, SD: 0.18 U/L, N: 80) |
| Within-Run (Control 2) | Not specified explicitly | 1.28% C.V. (Mean: 20.6 U/L, SD: 0.26 U/L, N: 80) |
| Within-Run (Control 3) | Not specified explicitly | 1.35% C.V. (Mean: 37.0 U/L, SD: 0.50 U/L, N: 80) |
| Within-Run (Human Pool) | Not specified explicitly | 1.63% C.V. (Mean: 35.5 U/L, SD: 0.58 U/L, N: 80) |
| Total (Control 1) | Not specified explicitly | 4.99% C.V. (Mean: 3.81 U/L, SD: 0.19 U/L, N: 80) |
| Total (Control 2) | Not specified explicitly | 1.70% C.V. (Mean: 20.6 U/L, SD: 0.35 U/L, N: 80) |
| Total (Control 3) | Not specified explicitly | 1.73% C.V. (Mean: 37.0 U/L, SD: 0.64 U/L, N: 80) |
| Total (Human Pool) | Not specified explicitly | 2.56% C.V. (Mean: 35.5 U/L, SD: 0.91 U/L, N: 80) |
| Imprecision (NPAP) | Low coefficient of variation (%C.V.) for controls and human pool | |
| Within-Run (Control 1) | Not specified explicitly, but generally <5-10% for clinical assays | 8.08% C.V. (Mean: 2.60 U/L, SD: 0.21 U/L, N: 80) |
| Within-Run (Human Pool) | Not specified explicitly | 8.11% C.V. (Mean: 2.96 U/L, SD: 0.24 U/L, N: 80) |
| Total (Control 1) | Not specified explicitly | 8.08% C.V. (Mean: 2.60 U/L, SD: 0.21 U/L, N: 80) |
| Total (Human Pool) | Not specified explicitly | 9.80% C.V. (Mean: 2.96 U/L, SD: 0.29 U/L, N: 80) |
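For readers who want to reproduce the arithmetic behind these figures, the short Python sketch below recomputes a reported %C.V. from its mean and SD and shows one way to obtain the slope, intercept, and R of a method comparison. This is an illustration only: the submission does not state which regression model (ordinary least squares, Deming, or Passing-Bablok) was used, and the paired values in the example are hypothetical, not study data.

```python
# Minimal sketch (not from the submission): recomputing the imprecision and
# method-comparison statistics reported above.
import numpy as np

def percent_cv(sd: float, mean: float) -> float:
    """Coefficient of variation: %C.V. = 100 * SD / mean."""
    return 100.0 * sd / mean

# Check against the reported TACP within-run figure for Control 1:
# mean 3.81 U/L, SD 0.18 U/L -> ~4.72 %C.V., matching the table.
print(round(percent_cv(0.18, 3.81), 2))  # 4.72

def method_comparison(candidate: np.ndarray, predicate: np.ndarray):
    """Slope, intercept, and correlation coefficient R of candidate vs. predicate,
    using ordinary least squares as an illustrative choice of regression."""
    slope, intercept = np.polyfit(predicate, candidate, deg=1)
    r = np.corrcoef(predicate, candidate)[0, 1]
    return slope, intercept, r

# Hypothetical paired results (U/L) purely to show the call pattern; the actual
# study used 94 (TACP) and 47 (NPAP) patient serum samples.
predicate = np.array([2.1, 5.4, 9.8, 15.2, 22.7, 30.1, 36.5])
candidate = np.array([2.3, 5.9, 10.6, 16.4, 24.8, 32.9, 40.1])
print(method_comparison(candidate, predicate))
```

A slope near 1, an intercept near 0, and an R close to 1, as reported in the table, are the pattern that supports the substantial-equivalence claim.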
Study Details
The provided document describes a study that aims to demonstrate substantial equivalence between the Dri-STAT® ACP Reagent on Synchron Systems (candidate device) and the Dri-STAT® ACP Reagent on Cobas Fara (predicate device).
- Sample size used for the test set and the data provenance:
  - Method Comparison Test Set:
    - TACP (Total Acid Phosphatase): 94 serum samples.
    - NPAP (Non-Prostatic Acid Phosphatase): 47 serum samples.
  - Imprecision Test Set: 80 measurements for each control level and the human pool, for both TACP and NPAP.
  - Data Provenance: Not explicitly stated, but clinical laboratory samples are typically collected in the country where the studies are performed (presumably the USA, given the manufacturer's location and the FDA submission). The studies are retrospective or concurrent analyses of specimens, as they involve method comparison and imprecision testing on collected samples.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable. For in vitro diagnostic (IVD) assays like this, the "ground truth" is typically established by the reference method (the predicate device in this case) or known assayed values of controls, rather than human expert consensus.
- Adjudication method for the test set: Not applicable. As this is an IVD assay evaluation, there is no human adjudication process involved in establishing ground truth for the samples. The predicate device's results serve as the comparison point.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done: No. This is an evaluation of an in vitro diagnostic reagent, not a medical imaging or interpretation device that would involve multiple human readers.
- If a standalone (i.e., algorithm only without human-in-the-loop performance) study was done: Yes, the entire evaluation is of the standalone performance of the Dri-STAT® ACP Reagent on Synchron Systems as an automated assay. There is no human-in-the-loop component described for its operation or result generation.
- The type of ground truth used:
  - Method Comparison: The "ground truth" or reference values are the results obtained from the predicate device (Dri-STAT® ACP Reagent on Cobas Fara).
  - Imprecision: The "ground truth" for the controls is their known assayed values; for the human pools, it is the mean value determined from multiple measurements.
- The sample size for the training set: Not applicable. For IVD reagents, there isn't a "training set" in the machine learning sense. The device's performance characteristics are established through analytical validation studies (method comparison, linearity, imprecision), not by training an algorithm on a dataset. The reagent formulation and instrument settings are designed and optimized by the manufacturer, rather than "trained."
- How the ground truth for the training set was established: Not applicable, as there is no training set in this context.