510(k) Data Aggregation
UniCel® DxC SYNCHRON® Clinical Systems, UniCel DxC 600 and 800 (80 days)
The UniCel DxC SYNCHRON Systems are fully automated, computer-controlled clinical chemistry analyzers intended for the in vitro determination of a variety of general chemistries, therapeutic drugs, and other chemistries of clinical interest in biological fluids such as serum, plasma, urine, or cerebrospinal fluid (sample type is chemistry dependent).
The UniCel DxC 600 and 800 Systems are the next generation of clinical chemistry analyzers in Beckman Coulter's SYNCHRON instrument family. The analyzers operate in conjunction with reagents, calibrators, and controls designed for use with SYNCHRON Systems. The DxC instruments feature bar code identification of samples and reagents, Closed Tube Sampling, Obstruction Detection and Correction, and a dual carousel reagent storage compartment with an onboard capacity of 59 cartridges. Major system components include sample and reagent handling systems, bar code readers, modular chemistry sections, cartridge chemistry systems, and reagent storage compartment, supported by power and hydropneumatic utilities.
The provided text describes the UniCel® DxC SYNCHRON® Clinical Systems (UniCel DxC 600 and 800) and their substantial equivalence to predicate devices, focusing on the performance of various chemistry assays.
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The document provides summary performance data comparing the UniCel DxC 800 System to the SYNCHRON LX20 PRO (predicate device) through method comparison studies (slope, intercept, and correlation coefficient (R)) and imprecision studies (mean, within-run SD, within-run %CV, total SD, total %CV). The document implies that the acceptance criteria are met if the values obtained from the UniCel DxC 800 are similar to or within acceptable ranges compared to the predicate device, demonstrating "substantial equivalence."
A table of explicit, per-analyte acceptance criteria (e.g., "slope must be between 0.95 and 1.05") is not stated in the document. Instead, the reported performance (slope, intercept, R, and imprecision) is presented as the evidence that the device meets the implied acceptance criterion of substantial equivalence to the predicate device. A sketch of how these statistics are typically computed follows the table below.
Implied Acceptance Criteria (based on comparison to predicate performance) and Reported Device Performance:
| Chemistry | Slope (expected ~1) | Intercept (expected ~0) | R (expected ~1) |
|---|---|---|---|
| **Modular assays** | | | |
| NA | 0.987 | 1.99 | 0.996 |
| K | 0.993 | 0.07 | 0.998 |
| CL | 1.005 | -0.86 | 0.997 |
| CO2 | 1.043 | -1.05 | 0.994 |
| CAL | 1.007 | -0.03 | 0.999 |
| ALBm | 0.990 | 0.05 | 1.000 |
| BUNm | 0.985 | 0.31 | 1.000 |
| CREm | 1.037 | -0.01 | 1.000 |
| GLUm | 1.006 | -0.11 | 1.000 |
| PHOSm | 1.004 | 0.02 | 0.999 |
| TPm | 0.992 | 0.08 | 0.996 |
| **Cartridge assays** | | | |
| CRPH | 1.024 | -0.03 | 0.999 |
| FE | 1.002 | -0.16 | 1.000 |
| LD | 1.005 | 5.54 | 0.999 |
| MG | 0.969 | 0.04 | 0.999 |
| PHE | 0.981 | 0.02 | 0.998 |
| URIC | 1.017 | -0.08 | 1.000 |

Qualitative drug assay (urine), BENZ: agreement with the predicate was 100% for both positive and negative samples (43 positive, 57 negative), against an implied criterion of 100% agreement.

Imprecision (various chemistries at low and high control levels): the implied criterion is low %CV; reported values are on the order of the NA low control's 0.6% within-run %CV and 0.9% total %CV.
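To make the slope/intercept/R metrics concrete, here is a minimal sketch of how such a method comparison can be computed from paired results. The values and the use of ordinary least squares are assumptions for illustration; submissions of this kind often use Deming or Passing–Bablok regression, and the document does not state which model was applied.

```python
import numpy as np

# Hypothetical paired sodium (NA) results in mmol/L: predicate (LX20 PRO) on x,
# candidate (DxC 800) on y. These values are invented for illustration.
predicate = np.array([128.0, 133.5, 138.2, 140.6, 144.9, 151.3])
candidate = np.array([128.4, 133.2, 138.6, 140.9, 144.7, 151.6])

# Least-squares fit: candidate = slope * predicate + intercept.
slope, intercept = np.polyfit(predicate, candidate, 1)

# Pearson correlation coefficient R between the two methods.
r = np.corrcoef(predicate, candidate)[0, 1]

print(f"slope = {slope:.3f}, intercept = {intercept:.2f}, R = {r:.3f}")
```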
2. Sample Sizes Used for the Test Set and Data Provenance
The "test set" consists of human biological fluid samples (serum, plasma, urine, cerebrospinal fluid, depending on the chemistry).
- Sample Sizes (N) for Method Comparison Studies:
- Modular Assays: N ranges from 111 (BUNm) to 219 (CO2).
- Cartridge Assays: N ranges from 91 (PHE) to 181 (LD).
- Qualitative Drug Assay (BENZ): 43 positive samples and 57 negative samples (total 100).
- Sample Sizes (N) for Imprecision Studies:
- The table states "UniCel 800 System Estimated Serum Imprecision (N=80)", i.e., 80 replicates per control level, per analyte, for the imprecision study (a computation sketch follows this list).
- Data Provenance: Not explicitly stated (e.g., country of origin). The studies are internal ("Summary of Performance Data" is submitted by Beckman Coulter). The types of samples (serum, plasma, urine, CSF) indicate they are from human subjects, likely clinical samples. No indication of retrospective or prospective is given, but typically such studies for regulatory submissions would be prospectively collected or a well-defined retrospective cohort.
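For the imprecision estimates, the sketch below shows one conventional (CLSI EP05-style) way to derive within-run and total %CV from replicated control results. The 40-runs-of-two-replicates layout and all values are assumptions for illustration; the document states only that N=80.

```python
import numpy as np

# Hypothetical control results arranged as 40 runs x 2 replicates (N = 80,
# matching the count cited in the summary). All values are invented.
rng = np.random.default_rng(0)
run_effects = rng.normal(0.0, 0.8, size=(40, 1))            # run-to-run variation
runs = 140.0 + run_effects + rng.normal(0.0, 0.6, (40, 2))  # replicate noise

grand_mean = runs.mean()
n_rep = runs.shape[1]

# Within-run variance: pooled variance of replicates within each run (MS_within).
within_var = runs.var(axis=1, ddof=1).mean()

# Between-run variance from one-way ANOVA components:
# MS_between = n_rep * var(run means); var_between = (MS_between - MS_within) / n_rep.
ms_between = n_rep * runs.mean(axis=1).var(ddof=1)
between_var = max((ms_between - within_var) / n_rep, 0.0)

within_cv = 100.0 * np.sqrt(within_var) / grand_mean
total_cv = 100.0 * np.sqrt(within_var + between_var) / grand_mean
print(f"within-run %CV = {within_cv:.1f}%, total %CV = {total_cv:.1f}%")
```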
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not provided in the document. For these types of in vitro diagnostic devices, "ground truth" is typically established by comparing the device's results to a legally marketed predicate device (as done here with the SYNCHRON LX20 PRO Systems) or a reference method. It's unlikely human experts were establishing a qualitative "ground truth" for quantitative chemistry assays in the way they might for an imaging AI device. The predicate device's measured values served as the reference for method comparison.
4. Adjudication Method for the Test Set
This information is not applicable/provided in the context of this type of analytical performance study. Adjudication methods (like 2+1, 3+1) are typically used in clinical studies where human readers are interpreting images or making diagnoses, and discrepancies need to be resolved. For analytical performance of chemistry analyzers, the "accuracy" is determined by comparing measured values to a reference method (the predicate device) or known concentrations in controls/calibrators, not by human adjudication of qualitative results.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with AI vs. without AI Assistance
An MRMC comparative effectiveness study is not applicable to this type of device. This device is a clinical chemistry analyzer, not an AI or imaging diagnostic device that assists human readers in interpretation. Therefore, there is no "human reader improvement with AI vs without AI assistance" to report.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done
Yes, the performance data presented (method comparison and imprecision) represents the standalone performance of the UniCel DxC SYNCHRON Clinical Systems (the "algorithm/device only" in this context). The device directly measures chemical analytes in biological samples and provides quantitative or qualitative results without human-in-the-loop interpretation of the measurement itself. The human "in the loop" would be the lab technician operating the machine and interpreting the numerical results in the context of a patient's clinical picture.
7. The Type of Ground Truth Used
For the method comparison studies, the "ground truth" or reference was the results obtained from the predicate device (SYNCHRON LX®20 PRO Systems). The study aimed to show that the new device's measurements correlated well with, and were equivalent to, the established predicate device's measurements.
For the imprecision studies, the ground truth was the known concentrations/activities of the control materials used. The goal was to show that the device produced consistent and reproducible results around these known values.
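As an illustration of checking results against assigned control values, a simple recovery (bias) computation might look like the following; the control targets and measured means are invented, not taken from the submission.

```python
# Hypothetical assigned control values and observed means over replicates.
targets = {"NA low": 120.0, "NA high": 160.0}   # assigned values (invented)
measured = {"NA low": 119.6, "NA high": 160.5}  # observed means (invented)

for level, target in targets.items():
    bias_pct = 100.0 * (measured[level] - target) / target
    print(f"{level}: recovery bias = {bias_pct:+.2f}%")
```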
8. The Sample Size for the Training Set
This information is not applicable/provided in the context of this traditional in vitro diagnostic device submission. These chemistry analyzers are not "trained" in the machine learning sense. Their performance is based on established electrochemical or photometric principles, reagent chemistry, and instrument calibration. There isn't a "training set" of data used to develop an algorithm in the way an AI model would be trained. The development and calibration processes are based on analytical chemistry principles and established quality control practices.
9. How the Ground Truth for the Training Set Was Established
As noted above, there isn't a "training set" with ground truth in the AI sense for this device. The development and validation of the device would have involved:
- Analytical Chemistry Principles: Calibrators with known, highly accurate concentrations are used to establish a calibration curve for each assay. These calibrators represent the "ground truth" for the instrument's quantitative measurements (see the calibration sketch after this list).
- Quality Control Materials: Internal and external quality control materials with known target ranges are regularly run to verify the instrument's ongoing performance.
- Predicate Device Comparison: The "ground truth" for demonstrating substantial equivalence for the new device's performance often comes from comparing its results to results generated by a legally marketed predicate device using the same or similar samples.
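As a concrete illustration of the calibration principle in the first bullet above, the sketch below fits a linear calibration line to assumed calibrator readings and inverts it to report a sample concentration. The values and the linear model are assumptions; actual assays may use chemistry-specific (including nonlinear) calibration math.

```python
import numpy as np

# Hypothetical calibrators: assigned concentrations vs. measured instrument signal.
cal_conc = np.array([0.0, 2.0, 5.0, 10.0])       # assigned values (invented units)
cal_signal = np.array([0.02, 0.41, 1.01, 2.03])  # measured response (invented)

# Fit signal = m * concentration + b, then invert for patient samples.
m, b = np.polyfit(cal_conc, cal_signal, 1)

def signal_to_conc(signal: float) -> float:
    """Convert a measured signal to a concentration via the calibration line."""
    return (signal - b) / m

print(f"{signal_to_conc(0.80):.2f}")  # ~3.92 in the calibrator's units
```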
The "software development statement of compliance" mentions "Validation" and "acceptance criteria" for the software, but this refers to the functionality and reliability of the software controlling the instrument, not the training of an AI algorithm based on clinical data.
Roche Diagnostics OMNI S Analyzer (109 days)
The Roche Diagnostics OMNI S Analyzer is a fully automated critical care analyzer intended to be used for the measurement of pH, PO2, PCO2, sodium, potassium, ionized calcium, chloride, hematocrit, glucose, lactate, urea/BUN, bilirubin, total hemoglobin, oxygen saturation, oxyhemoglobin, deoxyhemoglobin, carboxyhemoglobin and methemoglobin in samples of whole blood, serum, plasma and aqueous solutions as appropriate.
The provided text describes a 510(k) submission for a bilirubin assay on a critical care analyzer; it is not an AI algorithm with performance metrics like sensitivity, specificity, or AUC. The document focuses on establishing substantial equivalence to predicate devices for regulatory clearance, not on demonstrating performance against clinical acceptance criteria in the way an AI diagnostic would.
Therefore, many of the requested categories for AI device studies (e.g., sample size of test set, number of experts, adjudication method, MRMC studies, standalone performance, training set details) are not applicable or cannot be extracted from this type of regulatory submission. The document primarily discusses method comparison studies with existing commercial assays.
Here's an attempt to answer the questions based only on the provided text, highlighting the limitations:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state quantitative "acceptance criteria" for the bilirubin assay in terms of diagnostic performance (e.g., sensitivity, specificity, or accuracy targets). Instead, it discusses "acceptable performance" through method comparison studies with predicate devices.
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Substantially equivalent to legally marketed predicate devices | "Acceptable performance versus other analyzers" in method comparison studies |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document states, "The bilirubin parameter for use on the OMNI S Analyzer was compared to several legally marketed analyzers in the method comparison studies." However, it does not specify the sample size for these method comparison studies or the data provenance (country of origin, retrospective/prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not applicable. For laboratory assays like this, "ground truth" is typically established by reference methods or validated predicate devices, not by expert interpretation in the same way as imaging diagnostics. The text does not mention any human experts establishing ground truth for the test set.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable. Adjudication methods are typically used in imaging or clinical studies where subjective human interpretation needs to be reconciled. For a bilirubin assay, results are quantitative measurements, and reconciliation would involve comparing numerical values, not subjective interpretations. The text does not mention any adjudication method.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
Not applicable. This is a laboratory assay for measuring a biomarker, not an AI-assisted diagnostic device that would involve human readers or image interpretation.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Yes, in effect: the OMNI S Analyzer with the bilirubin assay is itself the standalone device. It operates without human interpretation of results in the diagnostic pipeline beyond reading the numerical output, so the development process inherently evaluated its standalone performance by comparing its results to predicate devices.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The "ground truth" for this bilirubin assay is established by comparison to results obtained from legally marketed predicate bilirubin assays on other analyzers (Roche Hitachi Analyzers, Radiometer ABL735, Beckman LX®20 System, Kodak Vitros System). These predicate devices are considered the "truth" for establishing equivalence.
8. The sample size for the training set
Not applicable. This is a chemical assay, not an AI algorithm. There is no concept of a "training set" in the context of developing this type of device.
9. How the ground truth for the training set was established
Not applicable, as there is no training set for this type of device.