510(k) Data Aggregation (254 days)
UA-Cellular® Complete is an assayed chemistry and cellular urine control for evaluating the accuracy and precision of automated procedures that measure urinary sediment and chemistry parameters on the Sysmex® UF-1000i™ Automated Urine Particle Analyzer and the Siemens® Clinitek Atlas Automated Urine Chemistry Analyzer utilizing the Clinitek Atlas 10 Reagent Pak.
The list of assayed parameters includes:
Sysmex UF-1000i: RBC (/μL), WBC (/μL), Epithelial (/μL), Cast, Bacteria (/μL), Crystals, Conductivity (mS/cm)
Siemens Clinitek Atlas with Atlas 10 Reagent Pak: Glucose (mg/dL), Bilirubin (As Measured), Ketones (mg/dL), Specific Gravity (As Measured), Blood (As Measured), pH (As Measured), Protein (mg/dL), Urobilinogen (EU/dL), Nitrite (As Measured), Leukocytes (As Measured), Color (As Measured), Clarity (As Measured)
UA-Cellular Complete is an in vitro diagnostic control prepared from stabilized mammalian red blood cells and white blood cells, stabilized bacteria, and simulated urine sediments in a preservative medium. Analyte levels are adjusted with appropriate chemicals.
Here's an analysis of the provided text regarding the acceptance criteria and study for the UA-Cellular® Complete device:
The provided text describes UA-Cellular® Complete, a quality control material, and its performance studies. It is important to note that this device is a control material for other diagnostic devices, not a diagnostic device itself that processes patient data. Therefore, the "acceptance criteria" and "study" described are focused on substantiating the control material's stability, accuracy (value assignment), and precision when used with the specified automated urine analyzers. The typical metrics for AI systems (like sensitivity, specificity, AUC) are not applicable here.
Here's the breakdown of the information requested, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for this control material are implicitly derived from its intended use to evaluate the accuracy and precision of automated procedures. The "reported device performance" refers to the demonstration that these criteria were met.
| Study Type | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Value Assignment | Assay data collected met the ±3 Standard Deviation requirements. | Data collected across three external sites and internally at Streck, using three separately manufactured lots. Each site provided a 10-run reproducibility study (n=40 per level). Four instruments and four operators were utilized. All data met the ±3 Standard Deviation requirements. |
| Open-Vial Stability | All values collected were within the assigned assay range. | Real-time data collected internally at Streck across three separately manufactured lots. One operator and one instrument for each type of analyzer were used. Values collected over the last 30 days of product dating were compared to Day 0 assayed values and were all within the assigned assay range. |
| Closed-Vial Stability | Data collected was within the documented assay ranges. | Verified using three separately manufactured lots, collected internally at Streck by one operator, using one instrument for each analyzer type. All data collected was within the documented assay ranges. |
| Precision Performance | All data for this study fell within the assigned assay values for the product. | Data collected at three external sites and internally at Streck across three separately manufactured lots. Each site provided a 10-run reproducibility study (n=40 per level). Four instruments and four operators were utilized. All data fell within the assigned assay values. |
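The ±3 Standard Deviation acceptance check described above is, at its core, a simple range comparison. As an illustration only (not taken from the 510(k) summary), here is a minimal Python sketch of such a check; the assigned values, site results, and function name are hypothetical.

```python
def within_3sd(results, assigned_mean, assigned_sd):
    """Return True if every result lies within assigned_mean ± 3 standard deviations."""
    lower = assigned_mean - 3 * assigned_sd
    upper = assigned_mean + 3 * assigned_sd
    return all(lower <= r <= upper for r in results)

# Hypothetical assigned values for one parameter (e.g., WBC, /uL) at one control level
assigned_mean, assigned_sd = 50.0, 2.5

# Hypothetical reproducibility results from one site, one lot
site_results = [48.2, 51.0, 49.7, 50.4, 47.9, 52.1, 50.8, 49.3]

print(within_3sd(site_results, assigned_mean, assigned_sd))  # -> True
```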
2. Sample Size Used for the Test Set and Data Provenance
The concept of a "test set" as typically understood for an AI algorithm (a distinct dataset for final performance evaluation) doesn't directly apply here, as this is a control material. Instead, the studies involved collecting data on the performance of the control material.
- Value Assignment and Precision Performance:
  - Sample Size: Each of the three external sites and the internal Streck site performed a "10-run reproducibility study for the tri-level control on each lot (n=40 per level)". Given three lots and four sites, this works out to 3 lots × 4 sites × 40 measurements per level = 480 data points per level for value assignment and precision. Since it is a tri-level control, the total comes to 3 levels × 480 = 1,440 data points (see the sketch after this list).
  - Data Provenance: Data was collected "across three external sites and data collected internally at Streck." The text does not specify the country of origin for the external sites, nor does it explicitly state whether the data was retrospective or prospective, though "real-time" is mentioned for stability testing, suggesting prospective collection.
- Open-Vial Stability and Closed-Vial Stability:
  - Sample Size: Not explicitly stated as "n=X" in the same way as value assignment. However, open-vial stability data were collected in real time across three separately manufactured lots over the last 30 days of product dating, and closed-vial stability was verified using three separately manufactured lots. The sample size here refers to the number of measurements taken over time for each lot, but precise numbers are not given.
  - Data Provenance: "All data was collected internally at Streck," which suggests US provenance. The "real-time" aspect indicates prospective data collection.
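For reference, the sample-size arithmetic from the Value Assignment / Precision item above, written out as a short Python sketch; the counts come from the text, and treating n=40 as measurements per level, per lot, per site is an interpretation.

```python
# Sample-size arithmetic for the value-assignment and precision studies
lots = 3           # separately manufactured lots
sites = 4          # three external sites plus Streck internal
n_per_level = 40   # measurements per level, per lot, per site (10-run study)
levels = 3         # tri-level control

per_level_total = lots * sites * n_per_level   # 3 * 4 * 40 = 480 data points per level
overall_total = per_level_total * levels       # 480 * 3 = 1,440 data points overall

print(per_level_total, overall_total)          # 480 1440
```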
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Ground truth concept for a control material: For a control material, the "ground truth" is not established by expert consensus on clinical findings, but rather by the assigned value range for each parameter, which is determined through a rigorous value assignment study.
- Number of 'experts' / operators: The Value Assignment and Precision Performance studies each utilized "Four operators." While their specific qualifications (e.g., medical technologist, lab technician) are not stated, they are implied to be qualified laboratory personnel capable of operating the analyzers and performing reproducibility studies.
4. Adjudication Method for the Test Set
The concept of an "adjudication method" (like 2+1 or 3+1) is typically used when multiple human readers interpret an image or clinical finding to establish a consensus ground truth, especially for diagnostic AI applications. This is not applicable here as:
- The device is a control material.
- The "ground truth" (assigned values) are established through a statistical process (meeting ±3 Standard Deviation requirements) using instrumental measurements, not human interpretation requiring adjudication.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, an MRMC comparative effectiveness study was not done. This type of study assesses how AI assistance impacts human reader performance, which is not applicable to a quality control material.
6. Standalone Performance (Algorithm Only Without Human-in-the-Loop Performance)
The device itself is a control material that works with automated analyzers. Its "performance" is inherently its interaction with the automated systems.
The studies described (Value Assignment, Stability, Precision) effectively evaluate the standalone performance of the control material when run on the automated analyzers (without manual data adjustment or human decision-making influencing the measurement itself, beyond typical operational procedures). The "operators" run the instruments; they do not interpret the control material results to make a diagnosis or similar complex decision.
7. Type of Ground Truth Used
For the UA-Cellular® Complete control material, the "ground truth" for its performance is defined by:
- Assigned Assay Range/Values: This is essentially a statistically determined target range or value for each parameter, established through robust multi-site, multi-lot studies (Value Assignment study) meeting specific statistical criteria (e.g., ±3 Standard Deviation requirements). This is a form of quantifiable reference value based on instrument measurement and statistical analysis.
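The 510(k) summary does not give the actual value-assignment formula, but a common convention for control materials is a mean ± k·SD range computed from pooled multi-site, multi-lot data. Below is a hedged Python sketch under that assumption; the data values and the choice of k=3 are illustrative only.

```python
import statistics

def assign_range(measurements, k=3):
    """Derive an assigned range as mean ± k standard deviations of the pooled data."""
    mean = statistics.mean(measurements)
    sd = statistics.stdev(measurements)
    return mean - k * sd, mean + k * sd

# Hypothetical pooled results for one parameter across sites and lots
pooled = [49.1, 50.3, 48.7, 51.2, 50.0, 49.6, 50.9, 48.4, 51.5, 49.8]

low, high = assign_range(pooled)
print(f"Assigned range: {low:.1f} to {high:.1f}")
```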
8. Sample Size for the Training Set
The concept of a "training set" is not applicable here. This device is a physical control material, not an AI algorithm that is trained on data. The studies described are for validation and characterization of the control material's properties (stability, assigned values, precision) with specific predicate devices, not for training an algorithm.
9. How the Ground Truth for the Training Set Was Established
As there is no "training set" for this control material, this question is not applicable.