Search Results
Found 333 results
510(k) Data Aggregation
(29 days)
JJX
This CalCheck set is an assayed control for use in calibration and for use in the verification of the assay range established by the Elecsys ß-CrossLaps/serum reagent on the cobas e immunoassay analyzers.
The ß-CrossLaps CalCheck 5 is used for calibration verification and assessment of the measuring range as required by laboratory certification agencies such as the College of American Pathologists or under CLIA certification. The CalChecks are a customer-convenience product and are not required to assess assay performance.
The provided document describes the ß-CrossLaps CalCheck 5, a quality control material intended for calibration verification and assay range verification on cobas e immunoassay analyzers.
Here's an analysis of the acceptance criteria and the studies performed:
1. Table of Acceptance Criteria and Reported Device Performance:
Study/Parameter | Acceptance Criteria | Reported Device Performance (as per studies presented) |
---|---|---|
Value Assignment | Assigned range for Levels 2-5: ±27% of the assigned value. Target value for Check 1: ≤0.05 ng/mL. | The CalChecks are run in duplicate on at least two (2) modules (each with two measuring cells) of the cobas e 801 with at least two runs. The assigned value is the median of at least six (6) determinations. The acceptance criterion for the assigned range is applied. |
Open Vial Stability | CalCheck Level 1: | |
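The value-assignment arithmetic summarized in the table above (assigned value as the median of at least six determinations, a ±27% assigned range for Levels 2-5, and a ≤0.05 ng/mL target for Level 1) can be sketched as follows. This is an illustrative reading of the summary, not the sponsor's actual procedure; the function names and example readings are hypothetical.

```python
import statistics

def assign_value(determinations):
    """Assigned value = median of at least six determinations
    (duplicates on at least two modules, at least two runs)."""
    if len(determinations) < 6:
        raise ValueError("need at least six determinations")
    return statistics.median(determinations)

def assigned_range(assigned_value, tolerance=0.27):
    """Assigned range for Levels 2-5: +/-27% of the assigned value."""
    return assigned_value * (1 - tolerance), assigned_value * (1 + tolerance)

# Hypothetical Level 3 readings in ng/mL
level3 = [0.52, 0.49, 0.51, 0.50, 0.53, 0.48]
value = assign_value(level3)
low, high = assigned_range(value)
print(f"Level 3 assigned value {value:.3f} ng/mL, range {low:.3f}-{high:.3f} ng/mL")

# Level 1 uses a fixed target rather than a percentage range
LEVEL1_TARGET_MAX = 0.05  # ng/mL
```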
(162 days)
JJX
The Elecsys CYFRA 21-1 CalCheck 5 is an assayed control for use in calibration verification and for use in the verification of the assay range established by the Elecsys CYFRA 21-1 reagent on the indicated Elecsys and cobas e immunoassay analyzers.
The Elecsys CYFRA 21-1 CalCheck 5 is a lyophilized product consisting of cytokeratin in a human serum matrix. During manufacture, the analyte is spiked into the matrix at the desired concentration levels.
CYFRA 21-1 CalCheck 5 is a lyophilized human serum matrix with added cytokeratin in five concentration ranges. The CalCheck includes:
- CYFRA 21-1 CalCheck 1: approximately
This document describes the premarket notification (510(k)) for the Elecsys CYFRA 21-1 CalCheck 5, a control device used for calibration verification and assay range verification for the Elecsys CYFRA 21-1 reagent on specific immunoassay analyzers.
Here's an analysis of the provided text in relation to your request:
1. A table of acceptance criteria and the reported device performance
Acceptance Criteria | Reported Device Performance |
---|---|
Value Assignment: Assigned value for each CalCheck level on cobas e 601/MODULAR ANALYTICS E170 analyzers defined as the mean value obtained over at least six determinations (duplicate runs on at least three analyzers). | Value assignment testing was conducted and passed pre-defined acceptance criteria. The assigned values are published. |
Assigned Range: For levels 2-5, the assigned range is calculated as ±21% of the assigned value. | The label states that each laboratory should establish appropriate acceptance criteria when using this product for its intended use. |
Cross-Platform Comparability (cobas e 411 vs. cobas e 601): Mean value obtained on the additional analyzer (cobas e 411) must be within 10% of the master platform (cobas e 601) assigned value. | This acceptance criterion was met, deeming assigned values from the master platform valid for Elecsys 2010, MODULAR ANALYTICS E170, cobas e 411, cobas e 601, and cobas e 602 immunoassay analyzers. |
Accelerated Stability (3 weeks at 35°C): Recovery of the stressed material compared to freshly reconstituted material was calculated. (Specific numerical acceptance criteria for recovery are not provided in the document, only that the study was conducted and recovery was calculated). | The study was conducted. No specific numerical performance values for recovery are explicitly stated as "met" or "non-met" in this summary section, only that the study was performed. |
On-board Stability (6 hours at 20-25°C): Test material compared as a % to freshly reconstituted reference. (Specific numerical acceptance criteria for % comparison are not provided). | The study was conducted. No specific numerical performance values for % comparison are explicitly stated. |
Real-Time Stability (Transferred from CalSet material): Stored CalSet reagents tested at T=0 and specified intervals over shelf life + one month. (Implies performance within expected ranges, but specific criteria are not listed). | The study was conducted, and the results are transferable to the CalCheck material to support a 12-month shelf life claim. |
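A minimal sketch of the two numeric checks described in the table above: the ±21% assigned range for Levels 2-5 and the requirement that the cobas e 411 mean fall within 10% of the master-platform (cobas e 601) value. The function names and example values are hypothetical.

```python
def assigned_range(assigned_value, tolerance=0.21):
    """Assigned range for CalCheck levels 2-5: +/-21% of the assigned value."""
    return assigned_value * (1 - tolerance), assigned_value * (1 + tolerance)

def comparable_to_master(master_value, additional_value, max_deviation=0.10):
    """Cross-platform check: the mean on the additional analyzer (cobas e 411)
    must lie within 10% of the master-platform (cobas e 601) assigned value."""
    return abs(additional_value - master_value) / master_value <= max_deviation

# Hypothetical CYFRA 21-1 level values in ng/mL
master = 10.2        # mean of at least six determinations on cobas e 601 / E170
e411_mean = 10.9     # mean observed on the additional cobas e 411
low, high = assigned_range(master)
print(f"assigned range: {low:.2f}-{high:.2f} ng/mL")
print("cobas e 411 comparable to master:", comparable_to_master(master, e411_mean))
```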
2. Sample sizes used for the test set and the data provenance
- Value Assignment: For each Elecsys CYFRA 21-1 CalCheck 5 lot, each CalCheck level is run in duplicate on at least three cobas e 601/MODULAR ANALYTICS E170 analyzers. This means at least 6 determinations per CalCheck level per lot. The document implies this data is generated internally by Roche Diagnostics, likely in laboratories in the USA (Indianapolis), Germany (Mannheim, Penzberg) based on the establishment registration. This is prospective data generation for product release and verification.
- Accelerated Stability: "One CYFRA 21-1 CalCheck 5 lot was evaluated on one cobas e 411 analyzer." Tests were performed in duplicate. This is likely prospective data.
- On-board Stability: "One CYFRA 21-1 CalCheck 5 lot was evaluated on one cobas e 411 analyzer." Samples tested in duplicate. This is likely prospective data.
- Real-Time Stability: Conducted on the CYFRA 21-1 CalSet material. "Stored CalSet reagents were tested at time point T=0 and at specified intervals over the shelf life of the device up to the planned shelf life plus one month." No specific sample size (number of lots or tests) is provided for the CalSet study. The data provenance is internal to Roche Diagnostics, prospective for shelf-life determination.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This section describes a quality control material, not a diagnostic device that relies on interpretation of data by experts. The "ground truth" here is the assigned value of the analyte concentration in the control material. This is established through a quantitative measurement process on pre-defined analyzers rather than expert consensus on, for example, medical images or patient diagnoses. Therefore, the concept of "experts establishing ground truth" as typically applied to image-based AI or clinical decision support AI is not directly applicable. The "experts" in this context would be the skilled laboratory personnel who perform the assays according to established protocols and the statisticians/scientists who determine the assigned values based on the collected analytical data. No specific number or qualifications are mentioned for such roles.
4. Adjudication method for the test set
Not applicable for a quality control material where quantitative analytical results are compared against pre-defined ranges and values. Adjudication methods like 2+1 or 3+1 are typically used for subjective assessments where multiple readers provide independent interpretations.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
Not applicable. This device is a quality control material used to verify the performance of an immunoassay analyzer and its reagents, not an AI-powered diagnostic tool for human readers.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
This is a standalone device in the sense that it is a physical control material. The performance of the device itself (its stability and assigned values) is evaluated, not an algorithm's classification capabilities. It is used with an immunoassay analyzer, but its own performance doesn't involve an "algorithm only" classification.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth for this device is the analytically determined assigned value of the cytokeratin concentration within the control material. This is established through rigorous laboratory testing using calibrated instruments and methods, rather than expert consensus, pathology, or outcomes data.
8. The sample size for the training set
Not applicable. This device is a physical control material, not a machine learning model that requires a training set. The reference to "master platform assigned value" and "additional analyzer" implies a validation step, but not a training process in the AI sense.
9. How the ground truth for the training set was established
Not applicable, as there is no training set for this type of device.
(48 days)
JJX
Multichem A1c control is intended for use as an assayed quality control to monitor the precision of laboratory testing procedures for the analyte, HbA1c, as listed in the package insert.
The use of quality control materials is indicated as an objective assessment of the precision of methods and techniques in use and is an integral part of good laboratory practices. A minimum of two levels of control are available to allow performance monitoring within the clinical range of the HbA1c assay method. Multichem A1c control is prepared from human red blood cells with added chemicals and stabilizers. The control is provided in liquid form for convenience.
The provided document describes the Multichem A1c device, which is an assayed quality control material for monitoring the precision of laboratory testing procedures for HbA1c. The study described focuses on the stability and value assignment of this quality control material, rather than the performance of a diagnostic algorithm or device in detecting disease. Therefore, many of the requested categories for a diagnostic device (like sample size for test set, data provenance, number of experts for ground truth, adjudication method, MRMC study, and standalone performance) are not applicable in the traditional sense.
Here's the information that can be extracted and presented based on the context of a quality control device:
1. Acceptance Criteria and Reported Device Performance
Device: Multichem A1c (Assayed) Control
Purpose: Quality Control Material for HbA1c testing.
Acceptance Criteria Category | Specific Criteria | Reported Device Performance (Summary) |
---|---|---|
Value Assignment Ranges | Level 1: % HbA1c (DCCT/NGSP) target 4.00-7.00%; A1c (IFCC) target 20.2-53.0 mmol/mol. Level 2: % HbA1c (DCCT/NGSP) target 8.00-13.00%; A1c (IFCC) target 63.9-119.0 mmol/mol. | Value assignments performed successfully on Tosoh, Beckman, and Trinity analysers, with targets and ranges established for each. Specific targets and ranges for each level and analyzer combination are provided in Section 8.0. The control ranges are set based on 3 standard deviations of imprecision with a minimum applied range of ±10%. |
Open Vial Stability | Maximum Allowable Degradation / Drift Limit | |
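A minimal sketch of the control-range rule stated in the table above (target ± 3 standard deviations of imprecision, widened to at least ±10% of the target). The function name and example values are hypothetical.

```python
def control_range(target, sd, min_fraction=0.10):
    """Control range = target +/- 3 SD of imprecision,
    with a minimum applied range of +/-10% of the target."""
    half_width = max(3 * sd, min_fraction * target)
    return target - half_width, target + half_width

# Hypothetical Level 1 target of 5.5 %HbA1c with an imprecision SD of 0.12
low, high = control_range(5.5, 0.12)
print(f"Level 1 range: {low:.2f}-{high:.2f} %HbA1c")  # the +/-10% minimum dominates here
```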
(33 days)
JJX
The Audit® MicroControls™ Linearity DROP LQ Blood Glucose is intended to simulate human patient samples for use as assayed quality control material, determining linearity, calibration, and the verification of reportable range for the glucose analyte.
The Audit® MicroControls™ Linearity DROP LQ Blood Glucose is for In Vitro Diagnostic use only.
The Audit® MicroControls™ Linearity DROP LQ Blood Glucose is an in-vitro diagnostic device consisting of sets of 5 levels of liquid linearity material and additives in human-based serum. The product contains the following analyte: glucose. Each set consists of 5 levels labeled Level A, B, C, D and E. Each level has a fill size of 1 mL. Materials of human origin used in the manufacture of this linearity set have been tested using FDA-approved methods and found to be non-reactive for HBsAg and antibodies to HCV and HIV-1/2.
Here's a breakdown of the acceptance criteria and study information for the Audit® MicroControls™ Linearity DROP LQ Blood Glucose device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document provides acceptance criteria for stability claims rather than performance metrics like sensitivity or specificity.
Acceptance Criteria Category | Specific Criteria | Reported Device Performance/Met |
---|---|---|
Shelf Life | 2 years, when stored unopened at 2-8º C | Met |
Open Vial Stability | 7 days, when stored tightly capped at 2-8º C | Met |
2. Sample Size Used for the Test Set and Data Provenance
- The document mentions that multiple measurements were taken for the analyte at each level during value assignment and stability studies. However, specific sample sizes for a "test set" (e.g., number of unique samples or runs for a formal validation study) are not explicitly provided.
- Data Provenance: The studies were conducted internally by Aalto Scientific, Ltd. The data is retrospective for the completed accelerated stability study and real-time open vial stability, and prospective for the ongoing real-time shelf-life study. The country of origin of the data is not explicitly stated but can be inferred as the USA, where Aalto Scientific, Ltd. is located.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts
- This information is not applicable in the context of this device. The device is a quality control material used to verify the linearity, calibration, and reportable range of blood glucose measurement systems. The "ground truth" for this type of device is established through analytical value assignment based on measurements performed on a reference instrument, not through human expert interpretation of results.
- The document states: "Analyte value assignment for Level A through Level E was performed on Roche Cobas for the blood glucose analyte using the corresponding reagent. The analyte was measured multiple times. The mean value of the analyte was used to establish a target concentration value at each level."
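A minimal sketch of the value-assignment step quoted above: the target concentration for each level is the mean of repeated measurements on the reference instrument. The level labels match the document; the readings and variable names are hypothetical.

```python
from statistics import mean

# Hypothetical repeated glucose readings (mg/dL) per level on the reference analyzer
readings = {
    "A": [41, 43, 42, 42],
    "B": [118, 121, 120, 119],
    "C": [248, 252, 250, 251],
    "D": [397, 402, 399, 401],
    "E": [548, 553, 551, 550],
}

# Target concentration value for each level = mean of the repeated measurements
targets = {level: mean(values) for level, values in readings.items()}
for level, target in targets.items():
    print(f"Level {level}: target {target:.1f} mg/dL")
```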
4. Adjudication Method for the Test Set
- Not applicable. As the device is a quality control material and "ground truth" is established through analytical measurement, there is no human-based adjudication process for a test set in the conventional sense. The "adjudication" is inherent in the analytical measurement and statistical determination of mean values.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- No, an MRMC comparative effectiveness study was not done. This type of study is typically relevant for interpretative diagnostic devices (e.g., imaging AI) where human readers are making diagnoses. The Audit® MicroControls™ Linearity DROP LQ Blood Glucose is a quality control material for analytical instruments, not an interpretative device.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Not applicable. This device is a control material, not an algorithm. Therefore, "standalone performance" in the context of an algorithm's accuracy without human intervention is not relevant. Its performance is measured by its stability and its ability to produce expected values when analyzed by a blood glucose measurement system.
7. The Type of Ground Truth Used
- The ground truth (or target values) for the device's glucose levels was established through analytical measurement on a reference instrument (Roche Cobas c501) with multiple measurements and calculation of the mean value. This is essentially an instrument-based reference value rather than expert consensus, pathology, or outcomes data.
8. The Sample Size for the Training Set
- Not explicitly stated and not directly applicable as this is a physical control material, not an AI/ML algorithm. The concept of a "training set" is typically used for machine learning models. For this device, the equivalent would be the data used for the "value assignment" to establish its target concentrations. The document mentions "The analyte was measured multiple times" for value assignment, but a specific number is not given.
9. How the Ground Truth for the Training Set Was Established
- As explained in Point 7, the "ground truth" (target values) for the device's five levels (A-E) was established through repeated analytical measurements of the glucose analyte on a Roche Cobas c501 instrument. The mean of these multiple measurements was used to define the target concentration value for each level. Raw materials are subject to internal quality control.
(28 days)
JJX
The LIAISON® EBV IgM Serum Control Set (negative and positive) is intended for use as assayed quality control samples to monitor the performance of the LIAISON® EBV IgM assay on the LIAISON® Analyzer family.
The LIAISON® EBV IgM Serum Control Set (negative and positive) consists of liquid ready-to-use controls in human serum. The negative control is intended to provide an assay response characteristic of negative patient specimens and the positive control is intended to provide an assay response characteristic of positive patient specimens.
The controls are designed for use with DiaSorin LIAISON® EBV IgM assay on the LIAISON® analyzer family.
The provided text describes a 510(k) premarket notification for a medical device: the LIAISON® EBV IgM Serum Control Set. This document is a regulatory filing, not a research paper detailing a study of an AI/ML powered device. As such, many of the requested details regarding acceptance criteria, study design for AI models, human expert involvement, and specific performance metrics for an AI system are not present or applicable.
The document discusses analytical validation of a quality control material for an immunoassay, not an AI/ML algorithm. The "performance data" sections refer to studies demonstrating the utility and stability of this control material itself, rather than testing the performance of an AI system against clinical ground truth.
Therefore, for aspects related to AI/ML device performance validation, the document does not contain the necessary information. I will, however, extract the information that is present and relevant to the closest interpretation of the prompt for a non-AI device.
Here's what can be extracted and what information is missing:
1. A table of acceptance criteria and the reported device performance:
The document mentions "predetermined acceptance criteria" and that the modified device "meets" them, but does not provide a specific table of these criteria or the numerical performance results against them. It lists the types of studies conducted:
Study Type | Reported Performance/Outcome |
---|---|
Commutability (Matrix Effect) | "demonstrate that the modified device meets predetermined acceptance criteria" |
Precision Equivalence | "demonstrate that the modified device meets predetermined acceptance criteria" |
Control Value Assignment | "demonstrate that the modified device meets predetermined acceptance criteria" |
Control Range Definition | "demonstrate that the modified device meets predetermined acceptance criteria" |
Real Time Stability (Shelf-life) | Supports claims: "Shelf-life of 12 months at (2-8°C)" |
Real Time Stability (Open Use) | Supports claims: "Sixteen (16) weeks On-Board/Open Use Stability" (This is an improvement from the predicate's 4 weeks as noted in the "Summary of Similarities and Differences" table). |
2. Sample size used for the test set and the data provenance:
- Sample size for test set: Not explicitly stated. The studies mentioned (Commutability, Precision, Control Value Assignment, Control Range Definition, Stability) would involve a certain number of runs/measurements, but the specific number of "samples" (clinical specimens or control lots) used in these analytical studies is not provided.
- Data Provenance (e.g., country of origin, retrospective/prospective): Not specified. These are analytical studies of a quality control material which typically use manufactured lots of the product and potentially stored clinical samples for commutability, so the concept of "country of origin of data" is less relevant than for patient-derived datasets. Whether the studies were retrospective or prospective is not stated.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable: For a quality control material, the "ground truth" is typically established by the assigned values and ranges based on the manufacturing process and extensive analytical characterization, not by human experts adjudicating clinical cases.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable: There is no mention of adjudication, as this is pertinent to human review of clinical data, which is not the focus of this device's validation.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- Not applicable: This device is a quality control material for an immunoassay, not an AI-powered diagnostic tool. Therefore, MRMC studies or human reader improvement with AI assistance are not relevant.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Not applicable: This is not an AI algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the performance of the control material itself: The "ground truth" is based on analytical characterization of the control material (e.g., assigned values, measurement of precision, stability over time) using the LIAISON® EBV IgM assay and Analyzer family, consistent with established laboratory quality control practices. For commutability, it would likely involve testing of diverse clinical samples along with the controls to ensure they behave similarly.
8. The sample size for the training set:
- Not applicable: As this is not an AI/ML device, there is no concept of a "training set" for an algorithm.
9. How the ground truth for the training set was established:
- Not applicable: See above.
In summary, the provided document details the regulatory approval (510(k)) of a quality control material, not an AI/ML device. Therefore, the questions designed to probe the validation of an AI/ML system are largely not applicable to this document. The "acceptance criteria" and "performance data" mentioned refer to the analytical performance of the control material itself (e.g., its precision, stability, and commutability with patient samples) rather than the clinical performance of a diagnostic or AI system.
(28 days)
JJX
The LIAISON® CMV IgM Serum Control Set (negative and positive) is intended for use as assayed quality control samples to monitor the performance of the LIAISON® CMV IgM assay on the LIAISON® Analyzer family.
The LIAISON® CMV IgM Serum Control Set (negative and positive) consists of liquid ready-to-use controls in human serum. The negative control is intended to provide an assay response characteristic of negative patient specimens and the positive control is intended to provide an assay response characteristic of positive patient specimens.
The controls are designed for use with DiaSorin LIAISON® CMV IgM assay on the LIAISON® analyzer family.
The provided text describes a 510(k) premarket notification for a medical device called the "LIAISON® CMV IgM Serum Control Set." This document focuses on the acceptance criteria and the study that proves the device meets those criteria for this specific product.
It's important to understand that this is a quality control material used to monitor the performance of another assay (the LIAISON® CMV IgM assay), not a diagnostic device itself. Therefore, the "performance" here refers to its stability and ability to function as a reliable control, not diagnostic accuracy in identifying patient conditions.
Based on the provided text, here's the information regarding acceptance criteria and the study:
1. A table of acceptance criteria and the reported device performance
The document does not present a formal table of acceptance criteria with specific numerical targets. Instead, it lists the types of studies conducted to demonstrate that the modified device meets "predetermined acceptance criteria" and "design specifications." The reported "performance" is that it successfully met these criteria.
Acceptance Criterion (Implicit) | Reported Device Performance |
---|---|
Functional Equivalence/Commutability (between samples and controls; Matrix Effect) | Demonstrated that the modified device meets predetermined acceptance criteria, supporting equivalency of the modified device to the cleared device. |
Precision Equivalence (between samples and controls) | Demonstrated that the modified device meets predetermined acceptance criteria, supporting equivalency of the modified device to the cleared device. |
Control Value Assignment | Demonstrated that the modified device meets predetermined acceptance criteria, supporting equivalency of the modified device to the cleared device. |
Control Range Definition | Demonstrated that the modified device meets predetermined acceptance criteria, supporting equivalency of the modified device to the cleared device. |
Shelf-life Stability (12 months at 2-8°C) | Real Time Stability testing conducted on the LIAISON® CMV IgM Serum Control Set to support the claim. |
On-Board/Open Use Stability (16 weeks) | Real Time Stability testing conducted on the LIAISON® CMV IgM Serum Control Set to support the claim. Specifically, "Once opened controls are stable for sixteen (16) weeks when properly stored at 2-8ºC between uses." |
No new risks or altered safety/effectiveness | Based on findings from validation and verification activities, "the modifications to the LIAISON® CMV IgM Serum Control Set do not introduce any new risks to the performance of the device and do not alter safety and effectiveness." |
Functions as intended and meets design specifications | "Performance testing of the device demonstrates that the device functions as intended, meeting the requirements of design specifications." |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not specify the sample sizes (e.g., number of control sets, number of runs, number of replicates) used for the verification and validation (test set) studies. It also does not provide information on the country of origin of the data or whether the studies were retrospective or prospective. Given that it's a 510(k) for a control material, the studies would typically be prospective laboratory-based verification and validation studies.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This is not applicable in the context of this device. The "ground truth" for a quality control material is its chemical and immunological behavior and stability, not a diagnosis or clinical outcome. The validation is based on analytical performance studies, not expert consensus on patient data.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable. Adjudication methods like 2+1 or 3+1 are used for establishing ground truth in diagnostic studies involving human interpretation (e.g., radiology images), not for the analytical performance of a quality control material.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
Not applicable. MRMC studies are specific to evaluating diagnostic aids, often AI systems, and how they impact human reader performance. This device is a quality control material for an immunoassay, not a diagnostic or AI-powered system that human readers use directly to interpret patient cases.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
This refers to the performance of the control material itself in the context of the LIAISON® CMV IgM assay. The studies listed ("Commutability," "Precision," "Control Value Assignment," "Control Range Definition," and "Real Time Stability") are inherently "standalone" in that they evaluate the properties of the control material and its interaction with the analyzer and assay without a human "interpretation" component in how the control material functions. It is not an "algorithm" in the AI sense.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The "ground truth" for this quality control material is its pre-established composition and expected analytical performance characteristics based on reference methods and the established performance of the LIAISON® CMV IgM assay it controls. It's an internal, analytical "ground truth" rather than a clinical diagnostic one. For example:
- Commutability/Matrix Effect: Related to how the control material behaves similarly to actual patient samples.
- Precision: How consistently the control material yields results within a defined range.
- Control Value Assignment/Range Definition: Based on a rigorous process to determine the expected signal and acceptable variation for the positive and negative controls.
- Stability: Measured by maintaining specified performance characteristics over time.
8. The sample size for the training set
This is not explicitly stated and is not directly applicable in the sense of a machine learning "training set." The development of the control material and the initial characterization of its performance would involve laboratory experiments, but the document doesn't detail the sample sizes of materials used in that developmental phase. The "study that proves the device meets the acceptance criteria" refers to the verification and validation studies, not an AI training process.
9. How the ground truth for the training set was established
Not applicable for a "training set" in the context of AI. For the development and characterization of the control material, the "ground truth" (i.e., the target characteristics and performance) would have been established through:
- Careful formulation and manufacturing processes.
- Protocols for assigning target values and ranges, often using a large number of replicates and statistical methods.
- Comparison to existing reference materials or established clinical samples to ensure it behaves diagnostically like patient samples (for commutability).
- Controlled environmental conditions and repeated testing for stability.
In summary, this document is a 510(k) for a quality control material, not a diagnostic device that performs clinical interpretation or uses AI. Therefore, many of the questions asked (especially those related to human readers, experts, and AI-specific ground truth/training sets) are not relevant to the type of product described or the studies conducted to support its clearance. The focus is on the analytical performance and stability of the control material itself.
(29 days)
JJX
IDS-iSYS 17-OH Progesterone Control Set
The IDS-iSYS 17-OH Progesterone Control Set is for in vitro diagnostic use, for the quality control of the IDS-iSYS 17-OH Progesterone on the IDS-iSYS Multi-Discipline Automated System.
Rx Only.
IDS-iSYS 17-OH Progesterone Calibration Verifiers
The IDS-iSYS 17-OH Progesterone Calibration Verifiers are an in vitro diagnostic device intended for medical purposes in the quantitative verification of assay calibration and measuring range of the IDS-iSYS 17-OH Progesterone assay, when performed on the IDS-iSYS Multi Discipline Automated System.
Rx Only.
The IDS-iSYS 17-OH Progesterone Control Set consists of two sets of three vials, 1.0 mL each, in liquid form: human serum containing 17-OH Progesterone and sodium azide as a preservative (
The provided text describes the acceptance criteria and supporting studies for two in vitro diagnostic devices: the IDS-iSYS 17-OH Progesterone Control Set and the IDS-iSYS 17-OH Progesterone Calibration Verifiers.
1. Table of acceptance criteria and the reported device performance:
Study Type | Acceptance Criteria | Reported Device Performance |
---|---|---|
Control Set | ||
Value Assignment | Assigned target value defined as the mean of all runs for the IDS-iSYS 17-OH Progesterone assay and analyzer. | Expected values: Low control: 2.0 ng/mL, Medium control: 5.0 ng/mL, High control: 10.0 ng/mL. (The study provides these as the expected values, which implies they were achieved, but does not explicitly re-state them as performance against the criteria beyond the definition of the target value.) |
Closed Vial Stability | Mean concentration must be within QC ranges (as stated in Certificate of Analysis); precision: CV ≤ 10% for low concentration, ≤ 8% for middle and high concentration. | Accelerated stability studies (CLSI guideline EP25-A) support a stability claim of 9 months when stored at 2-8°C. Real-time studies are ongoing. (No explicit statement if current readings meet the mean concentration and precision criteria for the 9-month claim, only that it "supports" it.) |
Open Vial Stability | Percent recoveries within 10% of the reference material concentration. | Data supports the open vial stability claim of 49 days when stored at 2-8°C. (Implies the 10% recovery criterion was met.) |
On-Board Stability | Compared to a reference material run at time 0. (Specific quantitative criteria not explicitly stated; assumes comparison ensures acceptable performance.) | The on-board stability data supports the claimed on-board stability of 4 hours. (Implies acceptable comparison results.) |
Calibration Verifiers | | |
Value Assignment | Assigned target value defined as the mean of all runs for the IDS-iSYS 17-OH Progesterone assay and analyzer. | Expected values: Cal Ver 0: Undetectable, Cal Ver 1: 2.0 ng/mL, Cal Ver 2: 8.0 ng/mL, Cal Ver 3: 17.0 ng/mL. (Similar to control set, implies these were achieved.) |
Closed Vial Stability | Mean concentration must be within QC ranges (as stated in Certificate of Analysis); precision: CV ≤ 10% for low concentration, ≤ 8% for middle and high concentration. | Accelerated stability studies (CLSI guideline EP25-A) support a stability claim of 9 months when stored at 2-8°C. Real-time studies are ongoing. (No explicit statement if current readings meet the mean concentration and precision criteria for the 9-month claim, only that it "supports" it.) |
On-Board Stability | Compared to a reference material run at time 0. (Specific quantitative criteria not explicitly stated; assumes comparison ensures acceptable performance.) | The on-board stability data supports the claimed on-board stability of 4 hours. (Implies acceptable comparison results.) |
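A minimal sketch of the closed-vial precision criterion from the table above (CV ≤ 10% at the low level, ≤ 8% at the middle and high levels). The function names and replicate readings are hypothetical.

```python
from statistics import mean, stdev

def cv_percent(values):
    """Coefficient of variation as a percentage."""
    return 100 * stdev(values) / mean(values)

def precision_ok(values, level):
    """Closed-vial precision criterion: CV <= 10% for the low control,
    <= 8% for the middle and high controls."""
    limit = 10.0 if level == "low" else 8.0
    return cv_percent(values) <= limit

# Hypothetical replicate results (ng/mL) for the low control (target 2.0 ng/mL)
low_control = [1.95, 2.05, 2.10, 1.90, 2.00]
print(f"CV = {cv_percent(low_control):.1f}%, pass: {precision_ok(low_control, 'low')}")
```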
2. Sample size used for the test set and the data provenance:
- IDS-iSYS 17-OH Progesterone Control Set - Value Assignment:
- Sample Size: A minimum of 21 runs using cartridge batches (number of batches not specified) tested in triplicate on each of three IDS-iSYS Multi Discipline Automated Systems. Control solutions were prepared gravimetrically.
- Data Provenance: Not explicitly stated as retrospective or prospective, nor country of origin. Given it's a device manufacturer's study to establish product characteristics, it's inherently prospective in nature.
- IDS-iSYS 17-OH Progesterone Control Set - Stability (All types):
- Closed Vial (Real-time): Three lots of controls, tested in pentaplicate (5 replicates) at 2-month intervals for up to a minimum of 15 months. Each tested vial compared to reference material stored at -20°C.
- Closed Vial (Accelerated): Performed according to CLSI guideline EP25-A.
- Open Vial: Tested in duplicate at time points stated in the stability protocol (time points not detailed), against unopened vials.
- On-Board: Three batches of controls, using three IDS-iSYS instruments. Tested at 0, 2, 4, 6, and 8 hours compared to a reference material at time 0.
- Data Provenance: Same as above, likely prospective internal testing.
- IDS-iSYS 17-OH Progesterone Calibration Verifiers - Value Assignment:
- Sample Size: Minimum of five runs for each cartridge batch (number of batches not specified) tested in triplicate on each of three IDS-iSYS Multi Discipline Automated Systems. Calibration Verifier solutions were prepared gravimetrically.
- Data Provenance: Same as above, likely prospective internal testing.
- IDS-iSYS 17-OH Progesterone Calibration Verifiers - Stability (Closed Vial & On-Board):
- Closed Vial (Real-time): Three lots of calibration verifiers, tested in pentaplicate (5 replicates) at 2-month intervals for up to a minimum of 15 months.
- Closed Vial (Accelerated): Performed according to CLSI guideline EP25-A.
- On-Board: Three batches of calibration verifiers, using three IDS-iSYS instruments. Tested at 0, 2, 4, 6, and 8 hours compared to a reference material at time 0.
- Data Provenance: Same as above, likely prospective internal testing.
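The on-board stability design described above (readings at 0, 2, 4, 6, and 8 hours compared to a reference material at time 0) implies a percent-recovery comparison at each time point. The sketch below shows that comparison under the assumption of a simple ratio to the time-0 reference; all numbers are hypothetical.

```python
# Hypothetical on-board readings (ng/mL) for one control batch
reference_t0 = 5.02                               # reference material at time 0
timepoints = {0: 5.02, 2: 4.98, 4: 4.95, 6: 4.80, 8: 4.62}

for hours, value in timepoints.items():
    recovery = 100 * value / reference_t0
    print(f"{hours} h: {recovery:.1f}% of the time-0 reference")
```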
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):
This device is an in vitro diagnostic (IVD) quality control material and calibration verifier for automated systems, not an imaging AI device. The "ground truth" here refers to the reference values established for the control and calibration materials themselves, rather than diagnosis or interpretation by human experts.
- Value Assignment: The "ground truth" (assigned target values) for both the control set and calibration verifiers is established by the mean of all runs from multiple tests on three IDS-iSYS Multi Discipline Automated Systems, using cartridge batches, and confirmed by immunologic analysis using the IDS-iSYS 17-OH Progesterone assay. The solutions themselves are prepared gravimetrically from intermediate stock solutions.
- There are no "experts" in the traditional sense (e.g., radiologists) involved in establishing the ground truth for this type of chemical/immunological assay. The ground truth relies on the precision and accuracy of the analytical methods and the instrument itself.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
Not applicable. This is not a study involving human interpretation or subjective assessment of data that would require an adjudication method. The device's performance is determined by quantitative measurements and statistical analysis against predefined criteria.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
Not applicable. This device is an IVD quality control and calibration material, not an AI-powered diagnostic tool for human readers. No MRMC study was conducted or is relevant for this product.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
Yes, the studies are inherently standalone as they evaluate the performance of the control and calibration materials on the IDS-iSYS Multi-Discipline Automated System. The system/assay performs the measurements without human intervention in the interpretation of the control/verifier results themselves beyond setting up the experiment and analyzing statistical compliance. The performance reported (e.g., mean concentration, CV, percent recovery) is that of the material as measured by the automated system.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
The ground truth used is based on:
- Gravimetric preparation: For the initial stock solutions and preparation of control/calibration verifier samples.
- Immunologic analysis: Confirmation of concentrations using the IDS-iSYS 17-OH Progesterone assay itself.
- Statistical averaging: The mean of multiple runs on multiple instruments to establish the assigned target values, treated as the reference ground truth for quality control and calibration verification.
- Reference material comparison: For stability studies, comparison to a reference control material stored under optimal conditions (-20°C).
This is a form of analytical ground truth derived from established quantitative laboratory methods and statistical analysis rather than clinical consensus or pathological diagnosis.
8. The sample size for the training set:
Not applicable. This is not an AI/machine learning device that requires a training set. The values and stability characteristics are determined through direct analytical testing, not model training.
9. How the ground truth for the training set was established:
Not applicable. As there is no training set for an AI/machine learning model, no ground truth needed to be established for it.
(266 days)
JJX
The Quantimetrix Dropper hsCRP High Sensitivity CRP Control is intended for the quality control of laboratory testing procedures of high sensitivity C-reactive protein (CRP). It is intended for professional in vitro diagnostic use only.
The Dropper® hsCRP Control is a ready-to-use liquid control that does not require reconstitution nor frozen storage. The product is supplied in three clinically significant levels filled to 1mL in convenient plastic dropper bottles. The controls are formulated in a human serum derived matrix fortified with preservatives and stabilizers to maintain product integrity and inhibit microbial growth. Each control level is formulated with human CRP antigen to clinically significant targets that are ideal to monitor high sensitivity CRP test methods.
This document describes the Quantimetrix Dropper® hsCRP High Sensitivity CRP Control (K152117), a quality control material.
Since this is a quality control material and not a diagnostic device intended to diagnose a condition, the acceptance criteria and study details are focused on the stability and performance of the control material itself, rather than diagnostic accuracy metrics like sensitivity, specificity, or AUC, which are common for algorithms that interpret medical images or data.
1. Table of Acceptance Criteria and Reported Device Performance
Device: Quantimetrix Dropper® hsCRP High Sensitivity CRP Control
Feature | Acceptance Criteria | Reported Device Performance |
---|---|---|
Open Vial Stability (Refrigerated) | Control material remains stable for the specified duration at 2-8°C. | 90 days at 2 to 8°C |
Open Vial Stability (Room Temp) | Control material remains stable for the specified duration at room temperature (18-25°C). | 30 days at room temp. (18-25°C) |
Shelf-life Stability (Unopened) | Control material remains stable for the specified duration at 2-8°C. | 36 months at 2 to 8°C |
Value Assignment Ranges | Established from interlaboratory and intralaboratory data, calculating a mean value and applying a +/- 3SD range. | Values are established, and individual laboratory means should fall within the ranges listed. Each lab should establish its own ranges. |
Study Proving Acceptance Criteria:
The document states that "Accelerated stability studies were conducted to establish the open and unopened stability claims. Accelerated stability studies were conducted to establish the shelf-life stability claim. Acceptance criteria were met to support the product claims." Additionally, "Real time stability studies are ongoing and performed for every lot."
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not specify a distinct "test set" in the context of evaluating diagnostic accuracy. Instead, the "testing" involves stability studies and value assignment.
- Stability Studies: The sample size for stability studies would refer to the number of control vials subjected to various storage conditions and time points. This detail is not explicitly provided in the document.
- Value Assignment: For value assignment, samples of a specific lot are submitted to "multiple laboratories and Quantimetrix laboratory" for testing across "different analyzer platforms." The sample size here relates to the number of vials per lot tested and the number of replicates performed. This specific number is not provided.
- Data Provenance: The data for value assignment comes from "interlaboratory and intralaboratory data using instrument manufacturer's reagents." The country of origin is not explicitly stated, but the submission is to the US FDA, implying testing relevant to the US market. The nature of these studies is prospective in the sense that they are specifically conducted to establish values for each lot and to determine stability.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This concept does not apply directly to this device, as it is a quality control material, not a diagnostic algorithm interpreting data. The "ground truth" for the control material is its assigned value, which is determined through assaying the material using established laboratory methods.
- For Value Assignment: The "experts" involved are the personnel in the "multiple laboratories and Quantimetrix laboratory" who perform the testing. Their specific qualifications are not detailed, but they are expected to be trained laboratory professionals capable of running the hsCRP assays. No specific "number of experts" is given.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This concept typically applies to cases where multiple human readers are interpreting ambiguous medical images or data and need to reach a consensus. For a quality control material where analytical values are being determined, no formal adjudication method like 2+1 or 3+1 is mentioned or expected.
- Value Assignment: The "adjudication" is inherent in the "interlaboratory and intralaboratory data" where a mean value is calculated, and ranges are established (mean +/- 3SD). Outlier data might be identified and excluded statistically, but this is a statistical process, not a consensus among experts in the traditional sense.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study was done or is applicable. This type of study evaluates the impact of an AI algorithm on human reader performance, typically in diagnostic imaging. The Dropper® hsCRP High Sensitivity CRP Control is a quality control material, not an AI diagnostic algorithm, so this study type is irrelevant.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
No standalone algorithm performance study was done or is applicable. This device is a physical control material used in laboratory testing procedures. It is not an algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
For the value assignment of the control material, the "ground truth" is established by:
- Analytical Measurement: The measured concentrations of hsCRP in the control material, as determined by multiple analyses using established laboratory instruments and reagents (instrument manufacturer's reagents).
- Statistical Analysis: Calculation of a mean value and a standard deviation (specifically, a +/- 3SD range) from the collected analytical data.
For stability studies, the "ground truth" is that the assayed values of the control material remain within a predefined acceptable range (e.g., within the established mean +/- 3SD) over the tested duration under specified storage conditions.
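A minimal sketch of the value-assignment statistics described in points 4 and 7 (pool the interlaboratory and intralaboratory results, take the mean, and apply a ±3 SD range). The pooled data and function name are hypothetical.

```python
from statistics import mean, stdev

def assigned_range(results):
    """Assigned value = mean of pooled inter-/intralaboratory results;
    assigned range = mean +/- 3 standard deviations."""
    m, sd = mean(results), stdev(results)
    return m, (m - 3 * sd, m + 3 * sd)

# Hypothetical hsCRP results (mg/L) for one control level pooled across laboratories
pooled = [1.02, 0.98, 1.05, 1.00, 0.97, 1.03, 1.01, 0.99]
value, (low, high) = assigned_range(pooled)
print(f"assigned value {value:.2f} mg/L, range {low:.2f}-{high:.2f} mg/L")
```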
8. The sample size for the training set
This concept is not applicable as this is not an AI/ML device that requires a training set.
9. How the ground truth for the training set was established
This concept is not applicable as this is not an AI/ML device that requires a training set.
(86 days)
JJX
CONTOUR®NEXT control solutions are aqueous glucose solutions intended for use in self-testing by people with diabetes as a quality control check to ensure that the Contour Next Blood Glucose Monitoring Systems are working properly.
Contour Next Control Solution is used as a quality control check to assure the customer that their Bayer Contour Next blood glucose monitoring system is reading accurately. The control solution contains a tightly controlled amount of glucose. Every bottle of Contour Next test strips lists a range of acceptable values against which the result obtained when the control solution is applied to a test strip is compared. When the reading from the control solution falls within the range on the bottle, the system has been quality-control checked and shown to be accurate. If the reading falls outside the stated range, the user guide instructs the customer not to use the system until troubleshooting is done and/or customer service is contacted for help.
The Solution is a prescribed amount of glucose in water and includes a buffer, red dye, and a thickening agent. The control solution comes in a 2.5mL plastic bottle with an applicator tip. It comes in two glucose levels, Level 1 (about 45mg/dL) and Level 2 (about 125mg/dL).
The provided text is a 510(k) Summary for the Bayer Contour Next Control Solution. It describes the device, its intended use, and compares it to a predicate device. However, the document does not contain the specific details required to answer all parts of your request about acceptance criteria and a study proving those criteria are met, especially concerning numerical performance data, sample sizes, expert involvement, and ground truth methodologies.
This document is a regulatory submission demonstrating substantial equivalence, not a detailed scientific study report with raw performance data. It summarizes testing done but does not provide the granular information you're asking for.
Here's an attempt to answer based on the available information, with significant limitations and explicit mentions of missing data:
Acceptance Criteria and Device Performance
The document states that "Bench testing conducted showed that the modified Contour Next Control Solution performed as intended and met the system specifications." However, specific numerical acceptance criteria for performance (e.g., accuracy ranges for control solutions) and the detailed results demonstrating those are not provided in this document. It generally claims the device met specifications.
The provided differences table suggests implicit performance targets related to glucose concentration levels. For example, Level 1 is about 45 mg/dL for the predicate versus a nominal 0.03% for the modified solution, and Level 2 is about 125 mg/dL for the predicate versus 0.07%. If the percentages are weight/volume concentrations (1% w/v = 1 g/100 mL = 1000 mg/dL), 0.03% corresponds to 30 mg/dL and 0.07% to 70 mg/dL; the document does not state the units explicitly, and the exact equivalence or acceptance criteria for these specific concentrations are not detailed.
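For reference, the weight/volume conversion used in the comparison above can be written out explicitly; this is standard unit arithmetic, not a figure taken from the submission.

```python
def percent_wv_to_mg_per_dl(percent_wv):
    """Convert a weight/volume percentage to mg/dL.
    1% w/v = 1 g per 100 mL = 1000 mg per dL."""
    return percent_wv * 1000

for level, pct in (("Level 1", 0.03), ("Level 2", 0.07)):
    print(f"{level}: {pct}% w/v = {percent_wv_to_mg_per_dl(pct):.0f} mg/dL")
```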
Given the nature of a control solution, the primary performance acceptance criteria would usually revolve around:
- Stability: Maintaining glucose concentration within specified ranges over shelf-life and use-life.
- Accuracy: When tested with the blood glucose meter, the control solution results fall within the expected range printed on the test strip vial.
- Usability: End-users can successfully perform a control test.
The document claims these were met, but without numerical specifics.
Study Details
Due to the limited information in the 510(k) Summary, many of your questions cannot be fully answered.
- Table of Acceptance Criteria and Reported Device Performance:
Feature / Test Category | Acceptance Criteria | Reported Device Performance |
---|---|---|
Bench Testing | "Met system specifications" (details not provided) | "Performed as intended and met the system specifications" |
Stability Testing | Preservation of "shelf-life or use-life claims" (specific parameters not provided) | "No impact to the claimed shelf-life or use-life" |
Usability Testing | Ability of end-users to "perform a control test" (specific metrics not provided) | "No impact in the ability of end users to perform control tests" |
Glucose Concentration (Level 1) | Implicit: New nominal concentration of 0.03% | Modified: 0.03% |
Glucose Concentration (Level 2) | Implicit: New nominal concentration of 0.07% | Modified: 0.07% |
Impact of Surfactant | Implicit: No negative impact on performance | Surfactant added, assumed to not negatively impact performance, as overall conclusion is substantial equivalence. |
Control Test Temperature Range | Implicit: Functional within 15°C-35°C | Modified: 15°C-35°C |
Note: The "acceptance criteria" column is largely inferred from the "reported device performance" due to the lack of explicit, detailed criteria in the document. The exact numerical ranges that define "met system specifications" or "no impact" are not provided.
- Sample Size Used for the Test Set and Data Provenance:
- Sample Size: Not specified. The document does not mention the number of units tested for bench, stability, or usability testing.
- Data Provenance: Not specified. It's a Bayer Healthcare LLC internal study, but the geographical origin of the data (e.g., country of testing, participants in usability study) is not mentioned. It is an internal, retrospective summary of performance testing for a regulatory submission.
- Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
- Not applicable/Not specified. This device is a control solution for a blood glucose monitor. Performance testing would likely involve laboratory equipment, chemical analysis, and potentially human factors testing, but not typically "experts establishing ground truth" in the sense of clinical interpretation (e.g., radiologists reviewing images). The accuracy of glucose concentration is determined by analytical methods.
- Adjudication Method for the Test Set:
- Not applicable/Not specified. Adjudication is typically for resolving discrepancies in expert interpretation. Given the nature of performance testing for a control solution, this wouldn't be relevant.
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- No. An MRMC study is relevant for diagnostic devices requiring human interpretation (e.g., imaging devices). This is a quality control material.
- Standalone Performance:
- Yes, to an extent. The "bench testing" and "stability testing" sections refer to the performance of the control solution itself, independent of a human operator, in terms of its chemical properties and stability. "Usability testing" does involve human interaction, but its goal is to ensure the control solution's design doesn't hinder the user, not to assess human interpretation of outputs.
- Type of Ground Truth Used:
- For glucose concentration and stability: Likely analytical measurements against certified standards or reference methods. The "ground truth" for a control solution's glucose level is its actual chemical concentration determined by precise laboratory techniques.
- For usability: Ground truth for successful control testing would be the successful completion of the test and obtaining a result within the expected range, often observed or reported by test participants.
- Sample Size for the Training Set:
- Not applicable. This device is a chemical control solution, not an AI/machine learning algorithm that requires a "training set."
- How the Ground Truth for the Training Set Was Established:
- Not applicable. (See point 8).
(205 days)
JJX
The ADVIA Centaur® Follicle Stimulating Hormone (FSH) Master Curve Material is for in vitro diagnostic use in the verification of calibration and reportable range of the ADVIA Centaur FSH assay.
The ADVIA Centaur® Free thyroxine (FT4) Master Curve Material is for in vitro diagnostic use in the verification of calibration and reportable range of the ADVIA Centaur FT4 assay.
The ADVIA Centaur® Thyroxine (T4) Master Curve Material is for in vitro diagnostic use in the verification of calibration and reportable range of the ADVIA Centaur T4 assay.
The ADVIA Centaur® Triiodothyronine (T3) Master Curve Material is for in vitro diagnostic use in the verification of calibration and reportable range of the ADVIA Centaur T3 assay.
ADVIA Centaur® FSH Master Curve Material is an in vitro diagnostic product containing various levels of follicle stimulating hormone spiked in lyophilized equine serum with sodium azide (0.1% after reconstitution) and preservatives. Each set contains eight levels (MCM1–8), with a reconstituted volume of 1.0 mL/vial per level. MCM1 contains no analyte. The MCM assigned values are lot-specific, with target values of 0.00, 1.50, 4.50, 12.0, 30.0, 62.5, 130, and 225 mIU/mL.
ADVIA Centaur® FT4 Master Curve Material is an in vitro diagnostic product containing various levels of thyroxine in human plasma with sodium azide. Each set contains seven levels, with a reconstituted volume of 1.0 mL/vial per level. MCM1 contains no analyte. The FT4 MCM assigned values are lot-specific, with target values of 0.00, 2.00, 5.00, 10.0, 15.0, 22.0, and 35.0 µg/dL, which correspond to FT4 values of 0.00, 0.42, 0.80, 1.70, 3.0, 5.6, and 13.5 ng/dL.
ADVIA Centaur® T4 Master Curve Material is an in vitro diagnostic product containing various levels of levothyroxine in human plasma with sodium azide and preservatives. Each set contains six levels, with a reconstituted volume of 1.0 mL/vial per level. MCM1 contains no analyte. The T4 MCM assigned values are lot-specific, with target values of 0.00, 2.50, 5.00, 10.0, 15.0, and 35.0 µg/dL.
ADVIA Centaur® T3 Master Curve Material is an in vitro diagnostic product containing various levels of liothyronine in human plasma with sodium azide and preservatives. Each set contains seven levels, with a reconstituted volume of 1.0 mL/vial per level. MCM1 contains no analyte. The T3 MCM assigned values are lot-specific, with target values of 0.00, 0.42, 0.69, 1.11, 1.65, 3.87, and 7.00 ng/mL.
The provided document describes the Siemens Healthcare Diagnostics ADVIA Centaur® Master Curve Materials (MCM) for Follicle Stimulating Hormone (FSH), Free Thyroxine (FT4), Thyroxine (T4), and Triiodothyronine (T3). These devices are quality control materials used for verification of calibration and reportable ranges of their respective ADVIA Centaur assays. The information is presented as a 510(k) Summary, which focuses on demonstrating substantial equivalence to predicate devices rather than providing a detailed report of a clinical study of device performance against specific patient outcomes or diagnostic accuracy.
Here's a breakdown of the requested information based on the provided text, with an emphasis on its applicability to in vitro diagnostic quality control materials rather than typical diagnostic imaging AI studies, which often involve human readers and expert consensus.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for these devices are primarily related to stability (shelf-life, open vial, and on-board stability) and value assignment. The reported performance indicates that these criteria were met.
ADVIA Centaur® FSH Master Curve Material
Acceptance Criteria Category | Specific Acceptance Criteria | Reported Device Performance |
---|---|---|
Stability | ||
Real Time/Shelf Life (Unopened) | The dose recovery for MCM1 (analyte-free) and the % dose recoveries for MCM2-8 met the sponsor's required acceptance criteria. | Current testing meets acceptance criteria up to the 7 months' time point, supporting a 6-month shelf-life claim (storage unopened at 2-8°C). Real-time studies are ongoing. |
In-Use Open Vial (Reconstituted) | The dose recovery for MCM1 (analyte-free) and the % dose recoveries for MCM2-8 met the sponsor's required acceptance criteria. | Acceptance criteria for open vial (reconstituted) stability study were met up to the 29 days' time point, supporting an open vial claim of 28 days (when properly stored at 2-8°C). |
On-Board Stability | The dose recovery for MCM1 (analyte-free) and the % dose recoveries for MCM2-8 met the sponsor's required acceptance criteria. | On-board stability study met acceptance criteria at the 5 hours' time point, supporting an on-board stability claim for 4 hours. |
Value Assignment | The new MCM doses must fall within the final value assignment specification for FSH MCMs. The mean MCM doses of the newly manufactured MCM lot must fall within the customer range specifications. MCM1 must measure at or below the FSH assay sensitivity limit. | The document states that MCMs are value assigned using assigned reference calibrators and MCMs, and that MCMs are manufactured using qualified materials and measurement procedures. Performance verification runs (6 replicates per level) are conducted to ensure mean MCM doses fall within customer range specifications. No specific values or pass/fail statements for a particular lot are provided, but the summary implies that acceptance criteria were met as part of the overall finding of substantial equivalence. |
Expected Values | Lot-specific assigned values and customer ranges are established per sponsor's internal procedural specifications. | Example target values (mIU/mL): MCM1: 0.00, MCM2: 1.50, MCM3: 4.50, MCM4: 12.0, MCM5: 30.0, MCM6: 62.5, MCM7: 130, MCM8: 225. Assay Range: 0.3–200 mIU/mL. |
Traceability | Standardized against WHO 2nd International Standard for human FSH (IS 94/632). | ADVIA Centaur FSH = 0.91 × (WHO value) - 0.18 mIU/mL (r = 0.999). Assigned values for calibrators/MCMs are traceable to this standardization. |
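As a rough illustration of the stability and traceability figures reported in the table above, the sketch below computes a percent dose recovery for a stored MCM level against its -80°C reference and applies the quoted WHO standardization as a linear conversion. The ±10% recovery window and the example doses are placeholders; the sponsor's numeric acceptance limits are not disclosed in the summary.

```python
# A rough sketch under stated assumptions: percent dose recovery of a stored
# MCM level against its -80 °C reference, and the WHO traceability regression
# quoted in the table. The +/-10% recovery window and the example doses are
# placeholders; the sponsor's numeric acceptance limits are not disclosed.

RECOVERY_LIMITS_PCT = (90.0, 110.0)  # hypothetical acceptance window


def pct_dose_recovery(stored_dose: float, reference_dose: float) -> float:
    """% dose recovery = 100 * (stored MCM dose) / (-80 °C reference MCM dose)."""
    return 100.0 * stored_dose / reference_dose


def recovery_within_limits(stored_dose: float, reference_dose: float) -> bool:
    lo, hi = RECOVERY_LIMITS_PCT
    return lo <= pct_dose_recovery(stored_dose, reference_dose) <= hi


def advia_fsh_from_who(who_miu_ml: float) -> float:
    """Quoted standardization: ADVIA Centaur FSH = 0.91 x (WHO IS 94/632 value) - 0.18 mIU/mL."""
    return 0.91 * who_miu_ml - 0.18


if __name__ == "__main__":
    print(round(pct_dose_recovery(29.1, 30.0), 1))   # 97.0% vs. the -80 °C reference
    print(recovery_within_limits(29.1, 30.0))        # True under the assumed +/-10% window
    print(round(advia_fsh_from_who(30.0), 2))        # 27.12 mIU/mL for a 30 mIU/mL WHO value
```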
ADVIA Centaur® FT4 Master Curve Material
Acceptance Criteria Category | Specific Acceptance Criteria | Reported Device Performance |
---|---|---|
Stability | ||
Real Time/Shelf Life (Unopened) | The dose recovery for MCM1 and the % dose recoveries for MCM2-7 met the sponsor's required acceptance criteria. | Acceptance criteria met up to the 7 months' time point, supporting a 6-month shelf-life claim (storage unopened at 2-8°C). Real-time studies are ongoing. |
In-Use Open Vial (Reconstituted) | The dose recovery for MCM1 and the % dose recoveries for MCM2-7 met the sponsor's required acceptance criteria. | Acceptance criteria met, supporting an open vial claim of 14 days (when properly stored at 2–8°C). |
On-Board Stability | The dose recovery for MCM1 and the % dose recoveries for MCM2-7 met the sponsor's required acceptance criteria. | Acceptance criteria met up to 5 hours, supporting an on-board stability claim for 4 hours. |
Value Assignment | The new MCM doses must fall within the final value assignment specification for FT4 MCMs. The mean MCM doses must fall within customer range specifications. MCM1 must measure at or below the FT4 assay sensitivity limit. | MCMs are value assigned using assigned reference calibrators and MCMs. Performance verification run (6 replicates per level) ensures mean MCM doses fall within customer range specifications. A nested testing run protocol is used for MCM2-7 value assignment (20 replicates in total). Implies acceptance criteria were met. |
Expected Values | Lot-specific assigned values and customer ranges are established per sponsor's internal procedural specifications. | Example target values (ng/dL): MCM1: 0.00, MCM2: 0.42, MCM3: 0.80, MCM4: 1.70, MCM5: 3.0, MCM6: 5.6, MCM7: 13.5. Assay Range: 0.1-12.0 ng/dL. |
Traceability | Standardized to an internal standard manufactured using USP (United States Pharmacopeia) material. | Assigned values for calibrators and MCMs are traceable to this standardization. |
ADVIA Centaur® T4 Master Curve Material
Acceptance Criteria Category | Specific Acceptance Criteria | Reported Device Performance |
---|---|---|
Stability | ||
Real Time/Shelf Life (Unopened) | The dose recovery for MCM1 and the % dose recoveries for MCM2-6 met the sponsor's required acceptance criteria. | Acceptance criteria met up to the 11 months' time point, supporting a 10-month shelf-life claim (storage unopened at 2-8°C). Real-time studies are ongoing. (No information on open vial or on-board stability for T4 MCM in this summary) |
Value Assignment | The new MCM doses must fall within the final value assignment specification for T4 MCMs. The mean MCM doses must fall within customer range specifications. MCM1 must measure at or below the T4 assay sensitivity limit. | MCMs are value assigned using assigned reference calibrators and MCMs. Performance verification run (6 replicates per level) ensures mean MCM doses fall within customer range specifications. A nested testing run protocol is used for MCM2-6 value assignment (20 replicates in total). MCM6 is diluted 1:4 before testing to meet reportable range. Implies acceptance criteria were met. |
Expected Values | Lot-specific assigned values and customer ranges are established per sponsor's internal procedural specifications. | Example target values (µg/dL): MCM1: 0.00, MCM2: 2.50, MCM3: 5.00, MCM4: 10.0, MCM5: 15.0, MCM6: 35.0. Assay Range: 0.3–30.0 µg/dL. |
Traceability | Standardized to an internal standard manufactured using USP (United States Pharmacopeia) material. | Assigned values for calibrators and MCMs are traceable to this standardization. |
ADVIA Centaur® T3 Master Curve Material
Acceptance Criteria Category | Specific Acceptance Criteria | Reported Device Performance |
---|---|---|
Stability | ||
Real Time/Shelf Life (Unopened) | The dose recovery for MCM1 and the % dose recoveries for MCM2-7 met the sponsor's required acceptance criteria. | Acceptance criteria met up to the 11 months' time point, supporting a 10-month shelf-life claim (storage unopened at 2-8°C). Real-time studies are ongoing. |
In-Use Open Vial (Reconstituted) | The dose recovery for MCM1 and the % dose recoveries for MCM2-7 met the sponsor's required acceptance criteria. | Acceptance criteria met up to the 22 days' time point, supporting an open vial claim of 21 days (when properly stored at 2-8°C). |
On-Board Stability | The dose recovery for MCM1 and the % dose recoveries for MCM2-7 met the sponsor's required acceptance criteria. | Acceptance criteria met up to 5 hours, supporting an on-board stability claim for 4 hours. |
Value Assignment | The new MCM doses must fall within the final value assignment specification for T3 MCMs. The mean MCM doses must fall within customer range specifications. MCM1 must measure at or below the T3 assay sensitivity limit. | MCMs are value assigned using assigned reference calibrators and MCMs. Performance verification run (6 replicates per level) ensures mean MCM doses fall within customer range specifications. A nested testing run protocol is used for MCM2-7 value assignment (20 replicates in total). Implies acceptance criteria were met. |
Expected Values | Lot-specific assigned values and customer ranges are established per sponsor's internal procedural specifications. | Example target values (ng/mL): MCM1: 0.00, MCM2: 0.42, MCM3: 0.69, MCM4: 1.11, MCM5: 1.65, MCM6: 3.87, MCM7: 7.00. Assay Range: 0.1-8 ng/mL. |
Traceability | Standardized to an internal standard manufactured using USP (United States Pharmacopeia) material. | Assigned values for calibrators and MCMs are traceable to this standardization. |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document describes several non-clinical performance tests, primarily stability studies and value assignment. These are prospective tests conducted by the sponsor (Siemens Healthcare Diagnostics Inc.).
- Stability Studies (Test Set):
- Real Time/Shelf Life (Unopened): Each MCM (FSH, FT4, T4, T3) was stored unopened at 2-8°C and tested at T=0, 7 months, and 10 or 11 months. Ongoing studies were mentioned at further time points (e.g., 14, 18, 24, 25 months for FSH; 15, 19, 24, 30, 31 months for FT4, T4, T3). The comparison was made against -80°C reference MCMs. No specific number of replicate MCM vials tested at each time point is explicitly stated, other than 'test FSH MCMs were stored'.
- In-Use Open Vial (Reconstituted): Each MCM (FSH, FT4, T3) was reconstituted, pooled, aliquoted, and stored at 2-8°C, then tested in 5 replicates per level at T=0, 7, 14, 21, 28, and 29 days. The T4 MCM summary did not include open vial stability details.
- On-Board Stability: Pooled aliquots of each MCM (FSH, FT4, T3) in sample cups were stored on the ADVIA Centaur system and measured at T=0, 2, 4, and 5 hours. The T4 MCM summary did not include on-board stability details.
- Value Assignment (Test Set):
- For each new MCM lot, MCM1 (analyte-free) was run in 5 replicates on two separate runs.
- Other MCM levels (MCM2-8 for FSH, MCM2-7 for FT4/T3, MCM2-6 for T4) were tested using a nested testing run protocol with alternating samples of reference and new MCM, totaling 20 replicates.
- A performance verification run consisted of 6 replicates of each MCM level (a sketch of this value-assignment arithmetic follows this list).
- Data Provenance: The studies were conducted by Siemens Healthcare Diagnostics Inc., likely in the USA. The data is prospective, generated specifically for this 510(k) submission.
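A minimal sketch of the value-assignment arithmetic follows, under stated assumptions: the replicate counts (2 × 5 for MCM1, 20 for the nested assignment run, 6 for verification) come from the summary, but the customer range specification is not published, and treating the assigned value as a simple mean of the nested-run replicates is itself an assumption, so the ±10% window and all readings below are purely illustrative.

```python
# A minimal sketch of the value-assignment arithmetic described above, under
# stated assumptions: the replicate counts (2 x 5 for MCM1, 20 for the nested
# assignment run, 6 for verification) come from the summary, but the customer
# range is not published, so a +/-10% window around the assigned value is used
# purely for illustration. All readings are hypothetical.
from statistics import mean

FSH_SENSITIVITY_MIU_ML = 0.3  # lower end of the quoted FSH assay range


def assign_value(nested_replicates: list[float]) -> float:
    """Assigned value for an MCM level (assumed here to be the mean of the nested-run replicates)."""
    return mean(nested_replicates)


def mcm1_acceptable(replicates: list[float]) -> bool:
    """MCM1 (analyte-free) must measure at or below the assay sensitivity limit."""
    return all(x <= FSH_SENSITIVITY_MIU_ML for x in replicates)


def verification_passes(verification_replicates: list[float],
                        customer_range: tuple[float, float]) -> bool:
    """Mean of the 6-replicate verification run must fall within the customer range."""
    lo, hi = customer_range
    return lo <= mean(verification_replicates) <= hi


if __name__ == "__main__":
    nested = [29.4, 30.2, 29.8, 30.5, 29.9] * 4            # stand-in for 20 replicates
    assigned = assign_value(nested)
    print(round(assigned, 2))                               # 29.96 mIU/mL
    print(mcm1_acceptable([0.1, 0.2, 0.1, 0.0, 0.2] * 2))   # True (2 runs x 5 replicates)
    print(verification_passes([29.7, 30.1, 29.5, 30.4, 29.9, 30.2],
                              customer_range=(0.9 * assigned, 1.1 * assigned)))  # True
```

In practice the assigned values and customer ranges would come from the sponsor's internal procedural specifications, as stated above; the sketch only shows the shape of the calculation.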
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This type of in vitro diagnostic quality control material does not rely on "expert" interpretation in the same way as, for example, an imaging device. The "ground truth" for these materials is established through a process of:
- Traceability: Standardization to internationally recognized reference materials (e.g., WHO 2nd International Standard for human FSH) or internal standards traceable to official pharmacopeia (USP T4, T3 stock).
- Assigned Values: The values of the MCMs are assigned by the manufacturer based on these traceable standards and validated measurement procedures.
- Internal Protocols: The "sponsor's internal procedural specifications" and "qualified materials and measurement procedures" form the basis for establishing the expected values and ranges.
Therefore, there are no "experts" (like radiologists) establishing ground truth in terms of diagnostic interpretation from patient data. The ground truth is analytical and based on metrological traceability and rigorous laboratory procedures.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
Adjudication methods like 2+1 or 3+1 (where multiple experts independently assess and then resolve discrepancies) are not applicable here. These methods are used in scenarios involving subjective expert interpretation, often for imaging or clinical diagnosis. For these quality control materials, "ground truth" and performance are determined through quantitative analytical measurements against established reference values and statistical methods to ensure measurements fall within specified limits.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
MRMC studies are typically performed for diagnostic imaging devices where human readers (e.g., radiologists) interpret cases, sometimes with AI assistance. This document describes in vitro diagnostic quality control materials, not a diagnostic AI device that involves human interpretation of cases. Therefore, no MRMC comparative effectiveness study was performed, and thus no effect size for human reader improvement with AI assistance is mentioned.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Yes, the studies described are standalone in the sense that they evaluate the performance of the quality control materials themselves (stability, value assignment) within the ADVIA Centaur assay system, without direct human cognitive input being part of the 'device's' analytical function during testing. The "algorithm" here is the underlying immunoassay technology, and the MCM's performance is tested analytically.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth used for these quality control materials is analytical traceability to recognized international standards (WHO) or national pharmacopeia (USP) for the respective analytes. It also includes reliance on the sponsor's "qualified materials and measurement procedures" and internal procedural specifications for value assignment. This is an objective, quantitative ground truth, not derived from expert consensus, pathology, or outcomes data.
8. The sample size for the training set
The concept of a "training set" is relevant for machine learning algorithms. These devices are chemical/biological reagents designed as quality control materials for immunoassays, not AI algorithms. Therefore, there is no "training set" as understood in machine learning. The manufacturing processes and associated testing are quality control steps, not a machine learning training phase.
9. How the ground truth for the training set was established
As there is no "training set" in the machine learning sense for these devices, this question is not applicable. The establishment of ground truth for the performance evaluation (test set, as discussed in point 7) is through analytical traceability to standards.
Ask a specific question about this device