510(k) Data Aggregation (59 days)
UA-Cellular for IQ is an assayed cellular urine control for evaluating the accuracy and precision of automated procedures that measure urinary sediment parameters.
UA-Cellular for IQ is a urinalysis control that contains stabilized human red and white blood cells and other inert particles in a preservative medium. It is a urine control for the IRIS iQ® 200 analyzer. The product is packaged in 120 mL plastic bottles with polypropylene screw caps and polyethylene liners. Two levels are offered: level 1 and level 2. The bottles are packaged in a box with the package insert / assay sheet. The product must be stored at 2 to 10 °C.
Here's an analysis of the provided text regarding the acceptance criteria and the study that proves the device meets those criteria:
Device: UA-Cellular™ for IQ (Urinalysis Control)
A table of acceptance criteria and the reported device performance:
The document does not explicitly list "acceptance criteria" in a quantitative manner (e.g., "stability must be within X% of initial value"). Instead, the acceptance is implicitly tied to demonstrating performance (reproducibility and stability) that is "consistently reproducible, substantially equivalent to the predicate product and stable for the shelf life claimed."
Given the information provided, here's a table based on the claims and tested aspects:
| Acceptance Criterion (Implicit) | Reported Device Performance |
|---|---|
| Reproducibility (consistent run-to-run performance) | Demonstrated consistent reproducibility. |
| Closed-vial stability (shelf life) | Claimed: 60 days. Demonstrated: stable for a 60-day closed-vial period; results confirm lot-to-lot consistency. |
| Open-vial stability | Claimed: 30 days. Demonstrated: stable for a 30-day open-vial period. |
| Substantial equivalence to predicate product (Cell-Chex Auto) | "All testing showed that UA-Cellular for IQ is ... substantially equivalent to the predicate product." Specific equivalence metrics are not detailed; the implication is that its performance characteristics (reproducibility, stability) are comparable enough to support a finding of substantial equivalence by FDA. |
| Evaluation of accuracy and precision of automated procedures | Intended for this purpose, and the studies presumably confirmed its suitability, though no summarized data on this evaluation is provided. |
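The submission reports no quantitative acceptance metrics, but run-to-run reproducibility for a cellular control like this is conventionally summarized as a coefficient of variation (%CV) of repeated measurements, checked against the assay-sheet range. A minimal illustrative sketch of that convention; all counts, ranges, and the 10% CV limit below are hypothetical, not values from the submission:

```python
import statistics

def evaluate_control_runs(counts, assay_low, assay_high, max_cv_pct):
    """Summarize repeated control measurements (e.g., RBC/uL on the iQ 200).

    Returns the mean, the %CV, and whether every run fell inside the
    assay-sheet range with the %CV at or under the acceptance limit.
    """
    mean = statistics.mean(counts)
    cv_pct = 100 * statistics.stdev(counts) / mean
    in_range = all(assay_low <= c <= assay_high for c in counts)
    return {"mean": mean, "cv_pct": cv_pct,
            "acceptable": in_range and cv_pct <= max_cv_pct}

# Hypothetical level-1 RBC counts from 10 consecutive runs (cells/uL)
runs = [48, 51, 50, 49, 52, 47, 50, 51, 49, 50]
result = evaluate_control_runs(runs, assay_low=40, assay_high=60, max_cv_pct=10.0)
```

In practice a manufacturer would run such checks per level, per analyte, and per lot; the sketch only shows the basic %CV-within-range idea.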
Sample size used for the test set and the data provenance:
- Sample Size for Test Set: The document mentions "Three types of studies were conducted," but does not specify the sample size (e.g., number of vials, number of runs, number of instruments) used for the Run-to-Run Reproducibility, Open Vial Stability, or Closed Vial Stability tests.
- Data Provenance: The document does not explicitly state the country of origin of the data or whether the studies were retrospective or prospective. Given the submitter's address (Omaha, NE, USA) and the context of a 510(k) submission, it is highly probable the studies were conducted in the United States and were prospective in nature, designed specifically to support the 510(k) application.
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
This device is a quality control material for automated urinalysis, so the concept of expert-established "ground truth" (as with radiologists in imaging studies) does not directly apply. For a control material, the "ground truth" is its formulated value and its stability over time, measured empirically on the automated system it is designed to control; the control in turn assesses the accuracy and precision of that system's automated procedures. The document does not mention the use of experts to establish ground truth for the control material itself.
Adjudication method (e.g., 2+1, 3+1, none) for the test set:
Not applicable. Adjudication methods like 2+1 or 3+1 are typically used in clinical studies where multiple human readers interpret data, and discrepancies are resolved by an adjudicator. This document describes performance studies for a laboratory control material, not clinical interpretation by multiple readers.
Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with versus without AI assistance:
Not applicable. This device is a laboratory control material, not an AI-powered diagnostic device or a system designed to assist human readers. Therefore, an MRMC study and the concept of human reader improvement with AI assistance are irrelevant to this submission.
Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
Not applicable. This device is a physical control material, not an algorithm. Its performance is inherent to its formulation and stability when run on an automated instrument; in that sense it functions "standalone," independent of human interpretation, and its purpose is to monitor the performance of the automated instrument itself.
The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
For this type of device (a quality control material), the "ground truth" is typically established by:
- Formulation specifications: The known concentration and composition of stabilized cells and other particles introduced during manufacturing.
- Reference Method/Assay: The material's performance is likely measured against a highly accurate and precise reference method or instrument to establish its initial target values and assess its stability over time.
- Predicate Device Performance: Establishing "substantial equivalence" implies that the new device's performance aligns with the established performance characteristics of the predicate device (Cell-Chex Auto).
The document doesn't explicitly detail the specific reference methods or how the "ground truth" values for the control material's components were initially determined, but it would be based on validated analytical techniques.
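The submission does not describe how target values were assigned, but a common convention for QC materials is to set the assay-sheet target as the mean of replicate measurements on a reference instrument, with the expected range as mean ± k·SD (k = 2 is typical). A hypothetical sketch of that convention, not the manufacturer's actual value-assignment procedure:

```python
import statistics

def assign_target_range(replicates, k=2.0):
    """Assign an assay-sheet target and expected range from replicate
    measurements of a control on a reference instrument.

    Convention (hypothetical here): target = mean of the replicates,
    expected range = mean +/- k * sample standard deviation.
    """
    mean = statistics.mean(replicates)
    sd = statistics.stdev(replicates)
    return {"target": mean, "low": mean - k * sd, "high": mean + k * sd}

# Hypothetical WBC replicates (cells/uL) from value-assignment runs
wbc = [98, 102, 100, 97, 103, 101, 99, 100]
assigned = assign_target_range(wbc)
```

The choice of k trades specificity against sensitivity: a wider range flags fewer false out-of-control events but catches real drift later.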
The sample size for the training set:
Not applicable. This is a laboratory control material, not a machine learning model. Therefore, there is no "training set" in the context of AI/algorithms. The manufacturing process is designed and validated, but not "trained" in the AI sense.
How the ground truth for the training set was established:
Not applicable, as there is no training set for this type of device.