Search Results
Found 7 results
510(k) Data Aggregation
(237 days)
GLQ
CELL-DYN 22 Plus Control is an assayed hematology control for evaluating the accuracy and precision of the CELL-DYN Emerald 22 system.
Assayed parameters include:
WBC (10⁹/L), RBC (10¹²/L), HGB (g/dL), HCT (%), MCV (fL), MCH (pg), MCHC (g/dL), RDW (%), PLT (10⁹/L), MPV (fL), NEU (%), NEU (10⁹/L), LYM (%), LYM (10⁹/L), MON (10⁹/L), EOS (%), EOS (10⁹/L), BAS (%), BAS (10⁹/L)
CELL-DYN 22 Plus Control is an in-vitro diagnostic product that contains stabilized human red blood cells; human, mammalian, or simulated white blood cells; and a platelet component in a preservative medium. The product is packaged in polypropylene plastic vials containing 2.5 mL. The closures are polypropylene screw caps with polyethylene liners. There are three different levels (low, normal and high). The vials will be packaged in a six (6)- or twelve (12)-well vacuum-formed "clam-shell" container with the package insert / assay sheet. The product must be stored at 2-10 °C.
Here's a breakdown of the acceptance criteria and study information for the CELL-DYN 22 Plus Control, based on the provided 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document implicitly defines "acceptance criteria" through the types of studies conducted and the conclusions reached. It doesn't present a table with specific numerical thresholds for acceptance for each parameter (e.g., "WBC reproducibility must be within X% CV"). Instead, it states the overall goals and then concludes that the device met these goals.
Here's an interpretation of the implied acceptance criteria and the reported performance:
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Consistently reproducible performance | "consistently reproducible" |
Substantially equivalent to predicate device (Para 12 Plus) | "substantially equivalent to the predicate product" |
Stable for the claimed shelf life (closed vial) | "stable for the entire product dating" |
Stable for the claimed shelf life (open vial) | "stable for the entire product dating" |
Fulfills intended use for accuracy and precision on CELL-DYN Emerald 22 system | "fulfills its intended use when used as instructed" |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The document does not explicitly state the total number of samples or runs for each specific test (e.g., how many vials for closed-vial stability, how many individual tests for reproducibility).
- 10-Run Reproducibility: This indicates that at least 10 runs were performed for this specific test (a minimal %CV sketch follows this list).
- External Site Recovery: No specific sample size is provided for this.
- Data Provenance: The document does not specify the country of origin of the data or whether it was retrospective or prospective. Given the nature of a medical device submission, it's highly likely the studies were prospective, conducted specifically for this submission.
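For context, run-to-run reproducibility for a control of this kind is commonly summarized as a coefficient of variation (%CV) per parameter across the runs. The sketch below is a minimal illustration assuming ten hypothetical WBC recoveries; the submission does not report the actual run data or any CV limits.

```python
# Minimal sketch: 10-run reproducibility expressed as %CV for one
# parameter. The WBC recoveries below are hypothetical, not taken
# from the 510(k) summary.
from statistics import mean, stdev

runs_wbc = [7.1, 7.3, 7.0, 7.2, 7.4, 7.1, 7.2, 7.3, 7.0, 7.2]  # 10^9/L

avg = mean(runs_wbc)
cv_percent = stdev(runs_wbc) / avg * 100  # sample SD relative to the mean

print(f"mean = {avg:.2f} x10^9/L, %CV = {cv_percent:.1f}%")
```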
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of those Experts
This information is not applicable to this type of device (hematology control). Hematology controls are assayed products, meaning their expected values are pre-determined through precise laboratory measurements and statistical analysis, not through expert consensus like image interpretation. There were no "experts" establishing a ground truth in the sense of clinical interpretation.
4. Adjudication Method for the Test Set
This is not applicable. Adjudication methods like '2+1' or '3+1' are used when there's subjective interpretation involved, typically in clinical studies or image analysis where multiple readers evaluate cases and discrepancies need to be resolved. For a hematology control, performance is measured against pre-established assayed values or against the performance of a predicate device, not by expert adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. This type of study is relevant for diagnostic devices that involve human interpretation (e.g., radiologists reading images) and assessing how AI assistance impacts their performance. The CELL-DYN 22 Plus Control is a quality control material for an automated hematology analyzer, not a diagnostic tool requiring human interpretation.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
This question is not directly applicable in the context of this device. The CELL-DYN 22 Plus Control is a control material that is run on an automated analyzer. Its "performance" is how reliably it produces expected results on that analyzer. The device itself does not have an "algorithm" in the sense of an AI or diagnostic tool. The performance of the control is evaluated by measuring its values (e.g., WBC, RBC) on the CELL-DYN Emerald 22 system, which is an automated process. The "standalone" performance here would refer to the consistency and accuracy of the control material in isolation, which is what the stability and reproducibility studies aimed to demonstrate.
7. The Type of Ground Truth Used
The "ground truth" for the CELL-DYN 22 Plus Control is established by its assayed values. These values are determined through rigorous laboratory testing on reference instruments or using validated methods, and then likely subjected to statistical analysis to establish expected ranges for each parameter (e.g., WBC, HGB, PLT) for the low, normal, and high control levels. The performance of the control is then assessed by comparing the results obtained on the target instrument (CELL-DYN Emerald 22) to these predetermined assayed values and predicate product performance.
8. The Sample Size for the Training Set
This product does not have a "training set" in the context of machine learning or AI algorithms. It is a physical control material. Its "development" phase would involve formulating the material and then performing extensive testing (as described in the studies) to validate its stability, reproducibility, and assayed values.
9. How the Ground Truth for the Training Set Was Established
As there is no "training set" for this product, this question is not applicable. The ground truth (assayed values) for the control material itself is established through internal laboratory characterization and validation studies, often involving multiple runs on reference instruments and statistical analysis to determine the target ranges for each parameter.
(72 days)
GLQ
Cell-Chex™ with CPPD Crystals is an assayed control intended for monitoring total cell counts performed manually using a hemocytometer to validate quantitation of red and white blood cells in patient cerebrospinal fluid and body fluid samples including pleural, pericardial, peritoneal and synovial fluid. Level 1 contains calcium pyrophosphate dihydrate (CPPD) Crystals which can be used to monitor the presence of crystals in synovial fluid.
Cell-Chex™ with CPPD Crystals is also intended for monitoring white blood cell differentiation (Mononuclear, Polymorphonuclear; Neutrophils, Eosinophils, Basophils, Lymphocytes and Monocytes) in body fluid samples performed using Cytospin® smears.
Cell-Chex™ with CPPD Crystals is a stabilized suspension of human red blood cells, human white blood cells and calcium pyrophosphate dihydrate (CPPD) Crystals (Level 1 only) in a preservative medium. The product is packaged in glass vials containing 2.0 mL. The closures are polypropylene screw caps with polyethylene liners. There are two different levels. Level 1 contains a low cell count and CPPD crystals, and Level 2 contains a high cell count and no crystals. The vials will be packaged in a six (6)- or twelve (12)-well vacuum-formed clamshell container with the package insert / assay sheet. The product must be stored at 2-10 °C.
The provided text describes the Cell-Chex™ with CPPD Crystals device, its intended use, and a comparison to a predicate device (Cell-Chex™). It also mentions the types of studies conducted to establish its performance. However, it does not explicitly state "acceptance criteria" in a quantitative format or report specific numerical performance metrics against those criteria. Instead, the studies aim to demonstrate reproducibility, substantial equivalence to the predicate, and stability.
Here's an analysis based on the provided text, addressing your points where possible:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not specify quantitative acceptance criteria. Instead, it frames the success of the studies in terms of "consistently reproducible," "substantially equivalent to the predicate product," and "stable for the shelf life claimed."
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Device is consistently reproducible. | "All testing showed that Cell-Chex™ with CPPD crystals is consistently reproducible..." |
Device is substantially equivalent to the predicate product. | "All testing showed that Cell-Chex™ with CPPD crystals is ... substantially equivalent to the predicate product..." and "Results presented show it is consistently reproducible and performs comparably to the predicate product." The only difference is the type of crystal (CPPD vs. monosodium urate). |
Device is stable for the claimed shelf life (open and closed). | "All testing showed that Cell-Chex™ with CPPD crystals is ... stable for the shelf life claimed." |
Device fulfills its intended use. | "Cell-Chex™ with CPPD Crystals fulfills its intended use as a control mixture for manual counting of Red Blood Cells and White Blood Cells in body fluids..." |
Device is safe and effective. | "Cell-Chex™ with CPPD Crystals is a safe and effective product when used as indicated in the instructions for use." |
2. Sample Size Used for the Test Set and the Data Provenance
The document does not explicitly state the sample sizes used for the "Closed Vial Stability," "Open Vial Stability," "Run-to-Run Reproducibility," and "Precision Performance" studies. It also does not specify the data provenance (e.g., country of origin, retrospective or prospective).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
The document does not describe the use of experts or the establishment of ground truth in the context of clinical or diagnostic performance for the test set. The device is a "Hematology Quality Control Mixture," meaning its "ground truth" (i.e., its expected cell counts and crystal presence) is internally derived from its manufacturing specifications and assessed through analytical studies, rather than by expert interpretation of patient samples.
4. Adjudication Method for the Test Set
Not applicable. The "ground truth" for this control device is based on its composition and manufacturing specifications, not on expert adjudication of diagnostic images or clinical cases.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance
Not applicable. This device is a hematology quality control mixture, not an AI or imaging device requiring human reader interpretation or assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Not applicable. This device is a laboratory control, not an algorithm.
7. The Type of Ground Truth Used
The ground truth for this device is its manufactured composition and stability characteristics. For example, Level 1 contains CPPD crystals, and its expected cell counts (low count) are inherent to its design as a control. The studies focused on confirming that these inherent properties are maintained over time (stability) and are consistently met across manufacturing runs (reproducibility and precision).
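For illustration, a closed- or open-vial stability claim is typically supported by recoveries measured over the dating period and checked against an allowable drift. The sketch below assumes hypothetical time points, recoveries, and a 10% drift limit; none of these figures come from the submission.

```python
# Minimal sketch: fit a least-squares drift line to recoveries measured
# over the dating period and compare the predicted total change to an
# allowable limit. Days, recoveries, and the 10% limit are hypothetical.
from statistics import mean

days = [0, 14, 28, 42, 56, 70]                      # test days
recovery = [98.0, 99.1, 97.6, 98.4, 97.9, 97.2]     # % of target value

x_bar, y_bar = mean(days), mean(recovery)
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(days, recovery)) \
        / sum((x - x_bar) ** 2 for x in days)
total_drift = slope * (days[-1] - days[0])           # change over full dating

verdict = "within" if abs(total_drift) <= 10 else "outside"
print(f"drift over {days[-1]} days: {total_drift:+.1f}% ({verdict} a 10% limit)")
```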
8. The Sample Size for the Training Set
Not applicable. This is a physical control product, not an AI/machine learning model that requires a training set. The studies described are analytical validation studies.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as there is no training set for an AI/machine learning model. The "ground truth" for the control material itself is established through its formulation, manufacturing process, and characterization by the manufacturer.
(103 days)
GLQ
Cell-Chex is an assayed control intended for monitoring total cell counts performed manually using a hemocytometer to validate quantitation of red and white blood cells in patient cerebrospinal fluid and body fluid samples including pleural, pericardial, peritoneal and synovial fluid. Level 1 contains monosodium urate crystals which can be used to monitor the presence of crystals in synovial fluid.
Cell-Chex is also intended for monitoring white blood cell differentiation (Mononuclear, Polymorphonuclear: Neutrophils, Eosinophils, Basophils, Lymphocytes and Monocytes) in body fluid samples performed using Cytospin® smears.
Cell-Chex is a stabilized suspension of human red blood cells, human white blood cells and monosodium urate crystals (Level 1 only) in a preservative medium. The product is packaged in glass vials containing 2.0 mL. The closures are polypropylene screw caps with polyethylene liners. There are two different levels. Level 1 contains a low cell count and monosodium urate crystals, and Level 2 contains a high cell count and no crystals. The vials will be packaged in a six (6)- or twelve (12)-well vacuum-formed "clam-shell" container with the package insert / assay sheet. The product must be stored at 2-10 °C.
The provided text describes the 510(k) submission for "Cell-Chex," a hematology control for body fluids. The device is intended for monitoring total cell counts and white blood cell differentiation using manual methods (hemocytometer and Cytospin® smears). The key change from the predicate device is the addition of monosodium urate crystals in Level 1 for monitoring crystal presence in synovial fluid.
Here's an analysis based on the provided text, addressing your specific questions:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state numerical acceptance criteria for various performance metrics. Instead, it describes the types of studies conducted and their general conclusions. The "reported device performance" is qualitative, indicating the device meets its intended use and is comparable to the predicate.
Acceptance Criterion (Implicit from Study Types) | Reported Device Performance |
---|---|
Closed Vial Stability | Cell-Chex is stable for the shelf life claimed (60 days). |
Open Vial Stability | Cell-Chex is stable for the open vial stability claimed (30 days). |
Run-to-Run Reproducibility | Cell-Chex is consistently reproducible. |
Precision Performance Data | Cell-Chex performs comparably to the predicate product. |
Crystal Detection (Level 1 only) | Can be used to monitor the presence of crystals in synovial fluid (listed as positive or negative). |
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify the sample size for the test sets used in the Closed Vial Stability, Open Vial Stability, Run-to-Run Reproducibility, or Precision Performance Data studies.
The data provenance is not explicitly stated as retrospective or prospective, nor does it mention the country of origin of the data. Given the nature of a control device, the "data" would likely originate from internal testing conducted by the manufacturer (Streck).
3. Number of Experts Used to Establish Ground Truth and Qualifications
The document does not mention the use of experts to establish ground truth for a test set in the traditional sense of clinical readings. This device is a control product, meaning its performance is evaluated against expected values for cell counts and crystal presence, not against expert interpretations of patient samples.
4. Adjudication Method for the Test Set
Since there were no human readers or expert interpretations of a test set of patient cases, an adjudication method like 2+1 or 3+1 is not applicable and therefore, not mentioned.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No MRMC comparative effectiveness study was mentioned or performed. This type of study is typically conducted for diagnostic devices where human interpretation directly impacts diagnosis and patient outcomes, and the AI is meant to assist human readers. Cell-Chex is a control used to validate the manual counting procedures, not a diagnostic AI system itself.
6. Standalone (Algorithm Only) Performance Study
No standalone performance study for an algorithm was done. Cell-Chex is a physical control material and does not involve an algorithm with standalone performance. Its purpose is to provide known values for manual counting methods.
7. Type of Ground Truth Used
The "ground truth" for Cell-Chex's performance is established by its known, pre-defined characteristics as a control material. This includes:
- Expected cell counts (red blood cells, white blood cells).
- Expected white blood cell differentiation percentages.
- Presence or absence of monosodium urate crystals (for Level 1).
These values are determined during the manufacturing and characterization of the control product itself, rather than by expert consensus on external patient data or pathology. The studies (stability, reproducibility, precision) then verify that the product consistently maintains these known characteristics over time and across different manufacturing lots.
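Because the control is read by the same manual procedure as patient samples, the underlying arithmetic is the standard hemocytometer calculation. The sketch below assumes an improved Neubauer chamber (one large square holds 0.1 µL); the counts and dilution are hypothetical.

```python
# Minimal sketch: cells/µL from a manual hemocytometer count, assuming
# an improved Neubauer chamber where one large square holds 0.1 µL.
def cells_per_microliter(cells_counted: int,
                         squares_counted: int,
                         dilution_factor: float = 1.0,
                         square_volume_ul: float = 0.1) -> float:
    """Concentration in cells/µL from a chamber count."""
    return cells_counted * dilution_factor / (squares_counted * square_volume_ul)

# e.g. 180 cells counted over 4 large squares of an undiluted sample
print(cells_per_microliter(180, 4))  # 450.0 cells/µL
```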
8. Sample Size for the Training Set
There is no mention of a training set sample size. This device does not involve a machine learning algorithm that requires training data.
9. How Ground Truth for the Training Set was Established
As there is no training set for a machine learning algorithm, this question is not applicable. The "ground truth" for the control material's inherent properties is established through the manufacturing process and quality control measures.
(56 days)
GLQ
STaK-Chex Plus Retics is an assayed whole blood control for evaluating the accuracy and precision of automated, semi-automated and manual procedures that measure blood cell parameters.
STaK-Chex Plus Retics is a stabilized suspension of human red blood cells, a nucleated red blood cell analog, a white blood cell component consisting of human analogs and a platelet component consisting of a non-human analog in a preservative medium. The product is packaged in plastic vials containing 5 mL. The closures are polypropylene screw caps with polyethylene liners. There are three different levels (low, normal and high). The vials will be packaged in a six (6)- or twelve (12)-well vacuum-formed "clam-shell" container with the package insert / assay sheet. The product must be stored at 2-10 °C.
The provided document describes the 510(k) submission for STaK-Chex® Plus Retics, an assayed hematology control. It details the device, its intended use, comparison to a predicate device, and the general types of studies conducted. However, it does not provide detailed acceptance criteria or the specific quantitative results of a study designed to prove the device meets those criteria in a format applicable to evaluating a medical diagnostic AI device.
This submission is for a quality control product, not an AI or diagnostic device that would typically have performance metrics like sensitivity, specificity, or AUC against a ground truth. The studies cited are focused on product stability and reproducibility to demonstrate substantial equivalence to a predicate device.
Therefore, many of the requested points cannot be answered based on the provided text, as they pertain to a different type of device evaluation (e.g., AI diagnostic device performance).
Here's an attempt to extract relevant information and note what is not available:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria in a table format for parameters like accuracy, precision, sensitivity, or specificity. The evaluation is focused on demonstrating "substantial equivalence" to a predicate device and stability. The "reported device performance" is qualitative, indicating consistent reproducibility and stability.
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Consistent Reproducibility (against predicate and itself) | "Consistently reproducible" |
Substantial Equivalence to predicate device (STaK-Chex Plus Retics (K992887)) | "Substantially equivalent to the predicate product" |
Stability for claimed shelf life | "Stable for the shelf life claimed" and "stable for the entire product dating" |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not explicitly stated. The studies mentioned are: Closed Vial Stability, Open Vial Stability, Run to Run Reproducibility, and External Site recovery of values. The number of samples (vials, runs, sites) used for each of these tests is not quantified.
- Data Provenance: Not specified (e.g., country of origin). The studies appear to be internal (from Streck) and possibly external (for recovery of values), but no specific locations are provided.
- Retrospective or Prospective: Not specified. Stability studies are typically prospective over time, but the overall nature of the data collection isn't detailed.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This question is not applicable to this device type. The "ground truth" for a quality control product like STaK-Chex Plus Retics would be the expected values of the blood cell parameters it is designed to measure, established through manufacturing processes and validated assays, rather than expert interpretation of images or patient data. No experts are mentioned in the context of establishing ground truth for the test set.
4. Adjudication Method for the Test Set
Not applicable. Adjudication methods (like 2+1 or 3+1) are typically used for disputes in expert interpretation of diagnostic results, which is not relevant for a quality control product.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance
Not applicable. This device is a hematology quality control mixture, not an AI or diagnostic system that human readers would use or that would assist human interpretation.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Not applicable. This is not an algorithm or AI device.
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)
The "ground truth" for this product type is its assigned values for blood cell parameters, which are established during manufacturing and validated through internal testing and calibration with reference methods. The document states it is an "assayed whole blood control," meaning its values are predetermined. The studies confirm that new lots or conditions (stability) maintain these expected values.
8. The Sample Size for the Training Set
Not applicable. This is not a machine learning or AI device that requires a training set.
9. How the Ground Truth for the Training Set Was Established
Not applicable. As above, there is no training set for this type of device.
In summary, the provided 510(k) summary focuses on demonstrating that a new version of a quality control product is substantially equivalent to an existing one by proving its stability and reproducibility. The type of acceptance criteria and study details requested are typically for diagnostic AI systems, which do not apply to this product.
(26 days)
GLQ
(17 days)
GLQ
XE Check is intended to be used as a control for evaluating complete blood cell count (CBC), white cell five-part differential, and reticulocyte percentage on Sysmex XE-2100 series hematology instruments. The device will consist of three levels: Abnormal Low (characterized by low CBC, high reticulocyte %), Normal (normal CBC, normal reticulocyte %), and Abnormal High (high CBC, low reticulocyte %).
XE Check is a suspension of stabilized human red blood cells, human white cells, simulated human platelets, and simulated human reticulocytes packaged in glass vials containing 4.6 mL volumes. Closures are injection molded polypropylene screw-top caps. The vials are packaged in polystyrene jars.
The provided 510(k) summary for XE Check, an assayed hematology control, lacks detailed acceptance criteria and a fully described study proving the device meets those criteria in the format requested. The document states that "Three studies of XE Check were conducted: I) Lot to Lot Reproducibility and Comparison to Whole Blood; II) Long Term Stability; and III) Open Vial Stability. Study results showed XE Check to be consistently reproducible, substantially equivalent to the predicate products, and stable for the entire product dating." However, it does not provide specific numerical acceptance criteria, reported performance values, or detailed study methodologies.
Therefore, many of the requested fields cannot be directly extracted from the provided text. Below is an attempt to answer the questions based on the available information, with clear indications where information is missing.
Acceptance Criteria and Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Not specified in the document. The document generally states the device should be "consistently reproducible," "substantially equivalent to the predicate products," and "stable for the entire product dating" for controlling "CBC/Diff/Retic parameters on Sysmex XE-2100 instruments." | "Study results showed XE Check to be consistently reproducible, substantially equivalent to the predicate products, and stable for the entire product dating." |
Detailed Study Information
- Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
  - Sample Size: Not specified in the provided text.
  - Data Provenance: Not specified in the provided text. It is implied to be prospective data generated during the product's development and testing by Streck Laboratories, Inc. in Omaha, Nebraska, USA.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
  - Not applicable, as this is a hematology control device, not an AI/diagnostic imaging device requiring expert ground truth for interpretation. The ground truth would be the expected values on the Sysmex XE-2100 instruments.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set
  - Not applicable for this type of device.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
  - Not applicable. This is a quality control material for an automated hematology analyzer, not an AI-assisted diagnostic tool for human readers.
- If a standalone (i.e., algorithm only, without human-in-the-loop) performance study was done
  - Not applicable. This device is a control material, not an algorithm. Its performance is evaluated in conjunction with the Sysmex XE-2100 instrument.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
  - The document implies the "ground truth" for the performance evaluation would be the expected or reference values obtained from the predicate devices (SF Check, Retic Chex) or established by the Sysmex XE-2100 instrument for whole blood samples. For a control material, the ground truth is often defined by target values assigned during the manufacturing and characterization process, verified against reference methods or instruments. The study "Lot to Lot Reproducibility and Comparison to Whole Blood" suggests comparison to actual human whole blood as a reference (a minimal bias-check sketch follows this list).
- The sample size for the training set
  - Not applicable. This is a physical control material, not a machine learning algorithm that requires a training set.
- How the ground truth for the training set was established
  - Not applicable.
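For context only, a comparison to whole blood is often summarized as a percent bias per parameter between the control's recoveries and paired whole-blood reference results. The sketch below is a minimal illustration with hypothetical values; the submission reports no underlying data or bias limits.

```python
# Minimal sketch: percent bias of control recoveries vs. a whole-blood
# reference, per parameter. All values are hypothetical placeholders.
control_means = {"WBC": 7.2, "RBC": 4.51, "RET%": 2.1}
whole_blood_means = {"WBC": 7.0, "RBC": 4.60, "RET%": 2.0}

for param, control in control_means.items():
    reference = whole_blood_means[param]
    bias_pct = (control - reference) / reference * 100
    print(f"{param}: bias = {bias_pct:+.1f}%")
```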
(87 days)
GLQ
STaK-Chex Plus Retics is intended to be used as a control for complete blood cell count (CBC), white cell five-part differential, and reticulocyte parameters on Beckman/Coulter GenS series hematology instruments.
STaK-Chex Plus Retics is a suspension of stabilized human red blood cells, human white cells, simulated human platelets, and simulated human reticulocytes packaged in glass vials containing 4.5 mL volumes. Closures are injection molded polypropylene screw-top caps. The vials are packaged in polystyrene jars.
The provided text describes a medical device called "STaK-Chex Plus Retics," an assayed hematology control. The 510(k) summary outlines its intended use and provides a brief discussion of test results.
Here's an analysis of the requested information based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state acceptance criteria in a quantitative format (e.g., specific thresholds for accuracy, precision, sensitivity, specificity). Instead, it describes general findings from its studies.
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Reproducibility | Consistently reproducible |
Substantial Equivalence | Substantially equivalent to the predicate product |
Stability | Stable for the entire product dating (Long Term Stability and Open Vial Stability) |
2. Sample size used for the test set and the data provenance
The document does not specify the sample size for any of the studies (I: Run to Run Reproducibility and Comparison to Whole Blood; II: Site to Site Reproducibility; III: Long Term Stability; IV: Open Vial Stability). It also does not mention the country of origin of the data or whether the studies were retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not provided. The device is a control for hematology instruments, implying that the "ground truth" would likely be derived from established laboratory protocols and instrument readings rather than expert consensus on patient cases.
4. Adjudication method for the test set
This information is not provided. Given the nature of a hematology control device, an adjudication method like 2+1 or 3+1 typically used for subjective image interpretation is not applicable. The "ground truth" for a control device is usually based on its known and verified values when used with calibrated instruments.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
An MRMC study was not done. This device is a quality control product for laboratory instruments, not an AI-assisted diagnostic tool that involves human readers.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
This concept is not applicable to the described device. STaK-Chex Plus Retics is a physical control material used with hematology analyzers, not an algorithm.
7. The type of ground truth used
The ground truth for this type of device would be the expected and verified values of the control material (stabilized human red blood cells, human white cells, simulated platelets, and simulated human reticulocytes) when analyzed by properly calibrated Beckman/Coulter GenS series hematology instruments. This is established through rigorous internal testing and characterization of the control material itself, rather than expert consensus on individual cases, pathology, or outcomes data.
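For context on the site-to-site reproducibility study mentioned above, one simple way to summarize such data is to compare the spread of per-site means with the pooled within-site spread. The sketch below uses hypothetical site data; the submission reports no numbers.

```python
# Minimal sketch: site-to-site reproducibility summarized as the SD of
# per-site means vs. the average within-site SD. Data are hypothetical.
from statistics import mean, stdev

site_runs = {                         # WBC recoveries (10^9/L) per site
    "site_A": [7.1, 7.2, 7.0, 7.3],
    "site_B": [7.3, 7.4, 7.2, 7.3],
    "site_C": [7.0, 7.1, 7.2, 7.0],
}

site_means = [mean(runs) for runs in site_runs.values()]
between_site_sd = stdev(site_means)
within_site_sd = mean(stdev(runs) for runs in site_runs.values())

print(f"between-site SD = {between_site_sd:.3f}, mean within-site SD = {within_site_sd:.3f}")
```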
8. The sample size for the training set
This information is not applicable. The term "training set" typically refers to machine learning algorithms, which is not relevant for this device. The studies mentioned (Run to Run Reproducibility, Site to Site Reproducibility, Long Term Stability, Open Vial Stability) are performance validation studies, not training.
9. How the ground truth for the training set was established
This information is not applicable, as there is no "training set" in the context of this device.