510(k) Data Aggregation
(26 days)
The iQ200 System is an in-vitro diagnostic device used to automate the complete urinalysis profile, including the urine test strip chemistry panel and microscopic sediment analysis. The iQ200 Analyzer can be used as a stand-alone unit, or its results can be combined with urine chemistry results received from an LIS. It produces quantitative or qualitative counts of all formed sediment elements present in urine, including cells, casts, crystals, and organisms. A competent human operator can set criteria for auto-reporting and flagging specimens for review. All instrument analyte image decisions may be reviewed and overridden by a trained technologist.
The iQ200 Series Automated Urine Microscopy system utilizes a specimen sandwiched between lamina layers presented to a microscope with a CCD video camera. This ensures the specimen is precisely within the microscope's focus and field of view. The system automates sample handling and analyte classification for improved data reporting and management. Specimens are aspirated by an autosampler, and individual particle images are isolated in each frame. The Auto-Particle Recognition (APR) software classifies images into 12 categories, with 27 additional sub-classifications available. Particle concentration is determined by the number of images and the analyzed volume. Results are checked against user-defined criteria and sent for operator review or directly uploaded to the LIS. Specimen results can be edited, imported, and exported.
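The concentration and flagging steps described above (particle images counted over the analyzed volume, then checked against user-defined review criteria) can be sketched as follows. This is a minimal illustration only: the function names, units, and thresholds are assumptions, not the iQ200's actual interfaces.

```python
# Illustrative sketch of concentration-from-counts and user-criteria flagging,
# as described for the iQ200. All names and numbers are hypothetical.

def particle_concentration(image_count: int, analyzed_volume_ul: float) -> float:
    """Concentration in particles per microliter: count / analyzed volume."""
    if analyzed_volume_ul <= 0:
        raise ValueError("analyzed volume must be positive")
    return image_count / analyzed_volume_ul

def needs_operator_review(category: str, conc_per_ul: float,
                          review_thresholds: dict) -> bool:
    """Flag a result for review if it meets or exceeds the user-defined
    threshold for its category; categories without a threshold auto-report."""
    limit = review_thresholds.get(category)
    return limit is not None and conc_per_ul >= limit

# Hypothetical run: 57 RBC images found in 1.5 uL of analyzed specimen.
conc = particle_concentration(57, 1.5)                     # 38.0 particles/uL
flag = needs_operator_review("RBC", conc, {"RBC": 25.0})   # True -> review
```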
(252 days)
The MicroScan Dried Gram-Negative MIC/Combo Panel is used to determine quantitative and qualitative antimicrobial agent susceptibility of colonies grown on solid media of rapidly growing aerobic and facultative anaerobic gram-negative bacilli. After inoculation, panels are incubated for 16-20 hours at 35°C ± 1°C in a non-CO2 incubator, and read either visually or with MicroScan instrumentation, according to the Package Insert.
This particular submission is for the addition of the antimicrobial cefepime at concentrations of 0.12-64 µg/mL to the test panel. Testing is indicated for Enterobacterales, Pseudomonas aeruginosa and Aeromonas spp., as recognized by the FDA Susceptibility Test Interpretive Criteria (STIC) webpage.
The MicroScan Dried Gram-Negative MIC/Combo Panels with Cefepime (CPE) (0.12-64 µg/mL) have demonstrated acceptable performance with the following organisms:
Enterobacterales (Enterobacter spp., Escherichia coli, Klebsiella pneumoniae, Proteus mirabilis, Citrobacter koseri (formerly Citrobacter diversus), Citrobacter freundii complex (Citrobacter freundii, Citrobacter werkmanii and Citrobacter youngae), Klebsiella oxytoca, Morganella morganii, Proteus vulgaris, Providencia stuartii, Providencia rettgeri, Serratia marcescens)
Pseudomonas aeruginosa
Aeromonas spp.
MicroScan Dried Gram-Negative MIC/Combo Panels are designed for use in determining quantitative and qualitative antimicrobial agent susceptibility of colonies grown on solid media of rapidly growing aerobic and facultative anaerobic gram-negative bacilli.
MicroScan panels are miniaturizations of the broth dilution susceptibility test: various antimicrobial agents are diluted in broth to concentrations bridging the range of clinical interest, dispensed into the panel wells, and dehydrated. Panels are rehydrated with water after inoculation with a standardized suspension of the organism. After incubation in a non-CO2 incubator for 16-20 hours, the minimum inhibitory concentration (MIC) for the test organism is read by determining the lowest antimicrobial concentration showing inhibition of growth.
The product is single-use and intended for laboratory professional use.
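The MIC read described above can be sketched in a few lines: given a doubling-dilution series of wells, the MIC is the lowest concentration at which growth is inhibited. The well representation and reporting convention below are illustrative assumptions, not the MicroScan system's internals.

```python
# Minimal sketch of a broth-microdilution MIC read: the MIC is the lowest
# antimicrobial concentration showing inhibition of growth. Hypothetical data.

def read_mic(wells):
    """wells: list of (concentration_ug_ml, growth_observed) pairs.
    Returns the lowest concentration with no observed growth, or None if
    growth occurred at every tested level (MIC above the tested range)."""
    inhibited = sorted(c for c, growth in wells if not growth)
    if not inhibited:
        return None  # typically reported as "> highest tested concentration"
    return inhibited[0]

# Doubling dilutions of cefepime spanning the cleared 0.12-64 ug/mL range:
wells = [(0.12, True), (0.25, True), (0.5, True),
         (1, False), (2, False), (4, False), (8, False),
         (16, False), (32, False), (64, False)]
print(read_mic(wells))  # 1 -> reported as MIC = 1 ug/mL
```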
Device Performance Acceptance Criteria and Study Details for MicroScan Dried Gram-Negative MIC/Combo Panels with Cefepime
Based on the provided FDA 510(k) Clearance Letter, the device in question is the MicroScan Dried Gram-Negative MIC/Combo Panels with Cefepime (CPE) (0.12-64 µg/mL), which is an Antimicrobial Susceptibility Test (AST) System. The study described focuses on demonstrating the substantial equivalence of this new configuration (with Cefepime) to a predicate device.
Given the nature of the device (an AST System), the "acceptance criteria" are typically related to the accuracy of determining Minimum Inhibitory Concentration (MIC) and the resulting categorical agreement (Susceptible, Intermediate, Resistant) compared to a reference method. The "study that proves the device meets the acceptance criteria" refers to the performance evaluation conducted for the 510(k) submission.
1. Table of Acceptance Criteria and Reported Device Performance
For AST systems, the key performance metrics are Essential Agreement (EA) and Categorical Agreement (CA) when compared to a CLSI (Clinical and Laboratory Standards Institute) frozen reference panel. The FDA document "Class II Special Controls Guidance Document: Antimicrobial Susceptibility Test (AST) Systems; Guidance for Industry and FDA", dated August 28, 2009, likely outlines the specific acceptance criteria thresholds for EA and CA. While the exact numerical acceptance criteria are not explicitly stated in the provided text, the performance "demonstrated acceptable performance" implies meeting these pre-defined thresholds.
| Performance Metric | Organism Group (Inoculation/Read Method) | Reported Device Performance (Essential Agreement) | Reported Device Performance (Categorical Agreement) | Acceptance Criteria (Implied / Based on FDA Guidance for AST) |
|---|---|---|---|---|
| Essential Agreement (EA) | Aeromonas spp. (Prompt Inoculation/WalkAway Instrument) | 93.5% | N/A | Typically ≥ 90% (Guidance based, not explicitly stated as a number) |
| Categorical Agreement (CA) | Aeromonas spp. (Prompt Inoculation/WalkAway Instrument) | N/A | 90.3% | Typically ≥ 90% (Guidance based, not explicitly stated as a number) |
| Essential Agreement (EA) | Pseudomonas aeruginosa (Prompt Inoculation/WalkAway Instrument) | 95.7% | N/A | Typically ≥ 90% (Guidance based, not explicitly stated as a number) |
| Categorical Agreement (CA) | Pseudomonas aeruginosa (Prompt Inoculation/WalkAway Instrument) | N/A | 91.4% | Typically ≥ 90% (Guidance based, not explicitly stated as a number) |
| Essential Agreement (EA) | Enterobacterales (Turbidity Method/WalkAway Instrument) | 94.7% | N/A | Typically ≥ 90% (Guidance based, not explicitly stated as a number) |
| Categorical Agreement (CA) | Enterobacterales (Turbidity Method/WalkAway Instrument) | N/A | 96.3% | Typically ≥ 90% (Guidance based, not explicitly stated as a number) |
| Essential Agreement (EA) | Aeromonas spp. (Turbidity Inoculation/autoSCAN-4 and Manual Reads) | 100.0% | N/A | Typically ≥ 90% (Guidance based, not explicitly stated as a number) |
| Essential Agreement of Evaluable Isolates | Aeromonas spp. (Turbidity Inoculation/autoSCAN-4 and Manual Reads) | 100.0% | N/A | N/A (Supplementary metric) |
| Categorical Agreement (CA) | Aeromonas spp. (Turbidity Inoculation/autoSCAN-4 and Manual Reads) | N/A | 87.1% | Typically ≥ 90% (Guidance based, not explicitly stated as a number) |
| Categorical Agreement (CA) | Aeromonas spp. (Turbidity Inoculation/WalkAway Read Method) | N/A | Below 90% | Typically ≥ 90% (Guidance based, not explicitly stated as a number) |
| Inoculum and Instrument Reproducibility | Cefepime (Turbidity/Prompt, autoSCAN-4/WalkAway) | Acceptable Reproducibility and Precision | N/A | (Implied acceptable performance) |
| Quality Control Testing | Cefepime | Acceptable Results | N/A | (Implied acceptable performance) |
Important Note: The document highlights some instances where the performance was "outside of essential agreement" for Enterobacterales with Prompt inoculation and "below 90%" for Aeromonas spp. with turbidity inoculation and WalkAway read method. These discrepancies are "mitigated with a limitation" in the product labeling, suggesting that while initial performance in those specific conditions did not meet implicit criteria, the overall robust performance with other methods/organisms, coupled with labeling limitations, made the device acceptable for clearance.
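The two agreement metrics in the table are computed, per isolate, as follows: Essential Agreement holds when the device MIC is within one doubling dilution of the reference MIC, and Categorical Agreement holds when both produce the same Susceptible/Intermediate/Resistant call. The sketch below illustrates these standard definitions; the breakpoints used are placeholders, not FDA STIC values.

```python
# Sketch of Essential Agreement (EA) and Categorical Agreement (CA) for a
# single isolate, per their conventional AST definitions. Breakpoints below
# are invented for illustration.

import math

def essential_agreement(device_mic: float, reference_mic: float) -> bool:
    """EA: device MIC within +/- one two-fold (doubling) dilution of the
    reference MIC."""
    return abs(math.log2(device_mic) - math.log2(reference_mic)) <= 1

def categorize(mic: float, susceptible_at: float, resistant_at: float) -> str:
    """S/I/R call from a MIC and a pair of categorical breakpoints."""
    if mic <= susceptible_at:
        return "S"
    if mic >= resistant_at:
        return "R"
    return "I"

def categorical_agreement(device_mic, reference_mic, s_bp, r_bp) -> bool:
    """CA: device and reference yield the same S/I/R category."""
    return categorize(device_mic, s_bp, r_bp) == categorize(reference_mic, s_bp, r_bp)

# Hypothetical isolate: device MIC 4, reference MIC 2, breakpoints S<=8, R>=32.
print(essential_agreement(4, 2))           # True (one dilution apart)
print(categorical_agreement(4, 2, 8, 32))  # True (both "S")
```

The percentages in the table are then the fraction of tested isolates for which each predicate holds.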
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The document does not explicitly provide a total number for the test set sample size (e.g., number of isolates tested). It refers to "contemporary and stock Efficacy isolates and stock Challenge strains" used for external evaluations.
- Data Provenance: The document does not specify the country of origin of the data. It mentions "external evaluations," which generally implies testing conducted at clinical sites or contract research organizations. The study appears to be retrospective in the sense that it uses "stock Efficacy isolates and stock Challenge strains" which are pre-existing collections of bacterial isolates. It also mentions "contemporary" isolates, suggesting some recent collection. It implies a laboratory-based performance study rather than a clinical trial with patient outcomes.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This type of device (AST System) does not typically rely on human expert interpretation for establishing the "ground truth" of the test set. The ground truth for antimicrobial susceptibility testing is established by a reference method, which for this device is stated as a "CLSI frozen Reference Panel."
Therefore:
- Number of Experts: Not applicable in the context of creating the ground truth for AST.
- Qualifications of Experts: Not applicable.
4. Adjudication Method for the Test Set
As the ground truth is established by a reference method (CLSI frozen Reference Panel), there is no human adjudication method like 2+1 or 3+1 typically used for image-based diagnostics. The device's results are directly compared to the quantitatively or qualitatively determined results from the CLSI reference method.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement with vs. without AI Assistance
There is no indication that an MRMC comparative effectiveness study was performed. This type of study is not relevant for this device, which is an automated or manually read laboratory diagnostic for antimicrobial susceptibility, not an AI-assisted diagnostic tool that aids human readers in interpretation. The device itself performs the susceptibility test.
6. If a Standalone Study (i.e., Algorithm Only, Without Human-in-the-Loop Performance) Was Done
Yes, the performance data presented is effectively standalone performance of the device (MicroScan Dried Gram-Negative MIC/Combo Panels with Cefepime). The device "read either visually or with MicroScan instrumentation" and its performance (Essential Agreement, Categorical Agreement) is directly compared to the reference standard. The "human-in-the-loop" would be the laboratory professional reading the results, and the study evaluates the accuracy of the device itself in producing those results. Where visual reads are mentioned, it's about the device's ability to produce clear inhibition patterns for visual interpretation, not a human independently interpreting raw data without the device.
7. The Type of Ground Truth Used
The ground truth used was established by a CLSI frozen Reference Panel. This is a recognized and standardized method for determining antimicrobial susceptibility, often involving broth microdilution or agar dilution methods where organisms are tested against known concentrations of antimicrobials. It is a highly controlled and quantitative method to determine the true MIC value against which the device's performance is compared.
8. The Sample Size for the Training Set
The document does not mention a training set or any details about its sample size. This is consistent with the nature of the device. AST systems are generally rule-based or empirically derived systems based on established microbiological principles, rather than machine learning models that require distinct training sets. The development of such panels involves extensive empirical testing during the R&D phase to ensure the correct concentrations and formulations, but this isn't typically referred to as a "training set" in the context of an AI/ML model.
9. How the Ground Truth for the Training Set was Established
As no training set (in the AI/ML sense) is indicated, this point is not applicable.
(219 days)
The MicroScan Dried Gram-Positive MIC/Combo Panel is used to determine quantitative and qualitative antimicrobial agent susceptibility of colonies grown on solid media of rapidly growing aerobic and facultative gram-positive cocci, some fastidious aerobic gram-positive cocci and Listeria monocytogenes. After inoculation, panels are incubated for 16-20 hours at 35°C ± 1°C in a non-CO2 incubator, and read either visually or with MicroScan instrumentation, according to the Package Insert.
This particular submission is for the addition of the antimicrobial daptomycin at concentrations of 0.06-32 µg/mL to the test panel. Testing is indicated for Enterococcus faecium, Enterococcus spp. other than E. faecium, and Staphylococcus spp., as recognized by the FDA Susceptibility Test Interpretive Criteria (STIC) webpage.
The MicroScan Dried Gram-Positive MIC/Combo Panels with Daptomycin (DAP) (0.06-32 µg/mL) have demonstrated acceptable performance with the following organisms:
- Enterococcus faecium
- Enterococcus spp. other than E. faecium (Enterococcus faecalis, Enterococcus avium, Enterococcus raffinosus, Enterococcus casseliflavus and Enterococcus durans)
- Staphylococcus spp. (Staphylococcus aureus, Staphylococcus epidermidis, Staphylococcus capitis, Staphylococcus haemolyticus, Staphylococcus lugdunensis, Staphylococcus hominis, Staphylococcus warneri, Staphylococcus simulans, Staphylococcus saprophyticus, Staphylococcus intermedius, and Staphylococcus sciuri)
MicroScan Dried Gram-Positive MIC/Combo Panels are designed for use in determining quantitative and/or qualitative antimicrobial agent susceptibility of colonies grown on solid media of rapidly growing aerobic and facultative anaerobic gram-positive bacteria.
MicroScan panels are miniaturizations of the broth dilution susceptibility test: various antimicrobial agents are diluted in broth to concentrations bridging the range of clinical interest, dispensed into the panel wells, and dehydrated. Panels are rehydrated with water after inoculation with a standardized suspension of the organism. After incubation in a non-CO2 incubator for 16-20 hours, the minimum inhibitory concentration (MIC) for the test organism is read by determining the lowest antimicrobial concentration showing inhibition of growth.
This product is single-use and intended for laboratory professional use.
The provided text specifies the performance validation of the MicroScan Dried Gram-Positive MIC/Combo Panels with Daptomycin (DAP). Here's a breakdown of the acceptance criteria and study information:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the reported performance metrics against a CLSI (Clinical and Laboratory Standards Institute) frozen Reference Panel. The primary metrics are Essential Agreement (EA) and Categorical Agreement (CA).
| Organism Group | Performance Metric | Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|---|---|
| Staphylococcus spp. | Essential Agreement (EA) | Acceptable performance | 94.3% |
| Staphylococcus spp. | Categorical Agreement (CA) | Acceptable performance | 99.5% |
| Enterococcus faecium | Essential Agreement (EA) | Acceptable performance | 90.8% |
| Enterococcus faecium | Categorical Agreement (CA) | Acceptable performance | 92.0% |
| Enterococcus species other than E. faecium | Essential Agreement (EA) | Acceptable performance | 100.0% |
| Enterococcus species other than E. faecium | Categorical Agreement (CA) | Acceptable performance | 94.1% |
Note: The document states "acceptable performance" without defining specific numerical thresholds for EA and CA as explicit "acceptance criteria." However, the reported values are presented as meeting this "acceptable performance." In antimicrobial susceptibility testing, FDA guidance typically considers EA ≥ 90% acceptable, with CA generally ≥ 90% to 95% depending on the organism/drug combination and resistance rates.
2. Sample Size Used for the Test Set and Data Provenance
The document mentions "external evaluations" conducted with:
- Fresh and stock Efficacy isolates
- Stock Challenge strains
It does not explicitly state the numerical sample size (number of isolates/strains) used for the test set.
The data provenance is not explicitly stated regarding country of origin or whether it was retrospective or prospective. Given the mention of "external evaluations," it implies that the testing was performed, but the location and study design (retrospective/prospective) are not detailed. Standard AST studies typically involve prospective collection of clinical isolates and/or the use of well-characterized reference strains.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The ground truth for the test set was established by comparison with a CLSI frozen Reference Panel. This implies that the reference standard, and thus the "ground truth," is determined by a highly standardized and validated laboratory method (broth microdilution or similar CLSI-recognized standard).
The document does not mention human experts being used directly to establish the ground truth for individual cases in the test set. Instead, the CLSI frozen Reference Panel itself serves as the gold standard.
4. Adjudication Method for the Test Set
No human adjudication method (e.g., 2+1, 3+1) is mentioned or implied, as the ground truth is established by the CLSI frozen Reference Panel, not by human interpretation or consensus.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
This is not applicable to this device. This device is an Antimicrobial Susceptibility Test (AST) panel, which determines the Minimum Inhibitory Concentration (MIC) of an antimicrobial agent against bacteria. It does not involve human readers interpreting images, and therefore, an MRMC study is not relevant. The device measures a quantitative result (MIC) which is then interpreted categorically (susceptible, intermediate, resistant) based on established breakpoints.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
Yes, this study represents a standalone performance evaluation of the device. The "MicroScan Dried Gram-Positive MIC/Combo Panels with Daptomycin" is a diagnostic test system. Its performance is evaluated against a recognized reference method (CLSI frozen Reference Panel) to determine its accuracy in reporting MIC values and categorical interpretations. While human operators inoculate and read the panels (either visually or with MicroScan instrumentation), the "performance" being assessed here is the device's ability to produce accurate results compared to the reference standard, not an AI algorithm's ability to interpret data without human input.
7. The type of Ground Truth Used
The ground truth used was comparison to a CLSI frozen Reference Panel. This is a laboratory-based gold standard for antimicrobial susceptibility testing, representing the accepted accurate measurement of MIC. It is a highly controlled and standardized method.
8. The Sample Size for the Training Set
The document does not mention a "training set" in the context of machine learning or AI. This is an AST panel, not an AI/ML diagnostic algorithm that requires a training set. The performance validation is based on a comparison to a reference standard using a test set of bacterial isolates/strains.
9. How the Ground Truth for the Training Set Was Established
As there is no concept of a training set for this type of device (AST panel based on traditional microbiological principles), this question is not applicable.
(186 days)
The MicroScan Dried Gram-Negative MIC/Combo Panel is used to determine quantitative and qualitative antimicrobial agent susceptibility of colonies grown on solid media of rapidly growing aerobic and facultative anaerobic gram-negative bacilli. After inoculation, panels are incubated for 16-20 hours at 35°C ± 1°C in a non-CO2 incubator, and read either visually or with MicroScan instrumentation, according to the Package Insert.
This particular submission is for the addition of the antimicrobial aztreonam at concentrations of 0.5-64 µg/mL to the test panel. Testing is indicated for Enterobacterales and Pseudomonas aeruginosa, as recognized by the FDA Susceptibility Test Interpretive Criteria (STIC) webpage.
The MicroScan Dried Gram-Negative MIC/Combo Panels with Aztreonam (AZT) (0.5-64 µg/mL) have demonstrated acceptable performance with the following organisms:
Enterobacterales (Citrobacter freundii complex, Citrobacter koseri, Escherichia coli, Klebsiella oxytoca, Klebsiella pneumoniae, Proteus mirabilis, Morganella morganii, Yersinia enterocolitica)
Pseudomonas aeruginosa
MicroScan Dried Gram-Negative MIC/Combo Panels are designed for use in determining quantitative and qualitative antimicrobial agent susceptibility of colonies grown on solid media of rapidly growing aerobic and facultative anaerobic gram-negative bacilli.
MicroScan panels are miniaturizations of the broth dilution susceptibility test: various antimicrobial agents are diluted in broth to concentrations bridging the range of clinical interest, dispensed into the panel wells, and dehydrated. Panels are rehydrated with water after inoculation with a standardized suspension of the organism. After incubation in a non-CO2 incubator for 16-20 hours, the minimum inhibitory concentration (MIC) for the test organism is read by determining the lowest antimicrobial concentration showing inhibition of growth.
This product is single-use and intended for laboratory professional use.
The provided FDA 510(k) clearance letter pertains to an Antimicrobial Susceptibility Test (AST) system, specifically the MicroScan Dried Gram-Negative MIC/Combo Panels with Aztreonam. It is not an AI/ML medical device. Therefore, many of the requested criteria regarding AI-specific study design (like MRMC studies, number of experts for AI ground truth, training set details) are not applicable to this type of device and study.
However, I can extract the relevant acceptance criteria and performance data for this AST device based on the provided document.
Acceptance Criteria and Device Performance (for an AST System)
The study proves the device's performance through comparison with a CLSI (Clinical and Laboratory Standards Institute) frozen Reference Panel. The criteria primarily revolve around "Essential Agreement (EA)" and "Categorical Agreement (CA)" between the new device and the reference method.
1. Table of Acceptance Criteria and Reported Device Performance
| Performance Metric | Acceptance Criteria (Implicit from FDA Guidance*) | Reported Device Performance (Aztreonam) | Relevant Organisms | Notes |
|---|---|---|---|---|
| Essential Agreement (EA) | Generally, >90% (based on "acceptable performance" for similar devices in FDA guidance) | 91.0% | Enterobacterales | Refers to agreement within one doubling dilution of the reference MIC. |
| Essential Agreement (EA) | Generally, >90% | 91.2% | Pseudomonas aeruginosa | Refers to agreement within one doubling dilution of the reference MIC. |
| Categorical Agreement (CA) | Generally, >90% (based on "acceptable performance") | 93.1% | Enterobacterales | Refers to agreement in clinical categorization (Susceptible, Intermediate, Resistant). |
| Categorical Agreement (CA) | Generally, >90% | 86.0%* | Pseudomonas aeruginosa | *Footnote states "Essential agreement of evaluable isolates 90.3% and most of the categorical discrepancies were minor errors," implying this was deemed acceptable despite being below 90% in raw number. |
| Reproducibility | Acceptable reproducibility and precision | Demonstrated acceptable reproducibility and precision | Aztreonam | Across different inoculum methods (Turbidity, Prompt) and instruments (autoSCAN-4, WalkAway). |
| Quality Control | Acceptable results for Quality Control | Demonstrated acceptable results | Aztreonam | Standard QC strains. |
Note: The document implicitly refers to the "Class II Special Controls Guidance Document: Antimicrobial Susceptibility Test (AST) Systems; Guidance for Industry and FDA", dated August 28, 2009. This guidance typically defines the statistical acceptance criteria for EA and CA for AST systems. The document states the device "demonstrated substantially equivalent performance when compared with a CLSI frozen Reference Panel, as defined in the FDA document..." meeting "acceptable performance."
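The footnote above distinguishes "minor errors" from more serious categorical discrepancies. In AST validation these classes are conventionally defined by comparing the device's S/I/R call to the reference's; the mapping below follows that convention and is a sketch of the terminology, not the submission's own tabulation.

```python
# Conventional classification of categorical discrepancies in AST studies.
# Categories are the S/I/R calls of the device vs. the reference method.

def error_type(device_cat: str, reference_cat: str):
    """Return None on agreement, else the conventional error class:
    very major = device S, reference R (false susceptibility);
    major      = device R, reference S (false resistance);
    minor      = any disagreement involving the Intermediate category."""
    if device_cat == reference_cat:
        return None
    if device_cat == "S" and reference_cat == "R":
        return "very major"
    if device_cat == "R" and reference_cat == "S":
        return "major"
    return "minor"

print(error_type("I", "S"))  # minor
print(error_type("S", "R"))  # very major
```

Minor errors are generally weighted less heavily in acceptability decisions, which is consistent with the footnote's reasoning for the 86.0% CA result.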
2. Sample Size Used for the Test Set and Data Provenance
- The document mentions "external evaluations were conducted with contemporary and stock Efficacy isolates and stock Challenge strains."
- Specific numerical sample sizes for the test set (number of isolates/strains) are not explicitly stated in the provided text.
- Data Provenance: The document does not specify the country of origin. It indicates the use of "contemporary and stock Efficacy isolates and stock Challenge strains," which suggests a mix of clinical and laboratory strains. The study appears to be prospective in nature, as new data was generated for this specific submission to demonstrate performance against a reference standard.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- This is an AST system, not an AI/ML device requiring expert radiological annotation.
- Ground Truth Establishment: The ground truth (reference MIC values and categorical interpretations) for the test set was established by a CLSI frozen Reference panel. This is a recognized standard method for AST device validation. The "experts" in this context are the established CLSI methodologies and laboratories that produce these reference panels, not individual human readers or annotators in the typical AI/ML sense.
4. Adjudication Method for the Test Set
- Adjudication, as typically described (e.g., 2+1, 3+1), is not applicable here because the ground truth is established by a standardized laboratory method (CLSI frozen Reference panel), not by consensus among human experts annotating medical images. The comparison is objective, based on measured MIC values.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- No, an MRMC comparative effectiveness study was not done. This type of study is specific to diagnostic imaging devices where human readers interpret medical images with and without AI assistance.
- This device is an in vitro diagnostic (IVD) antimicrobial susceptibility test system, where the output is a MIC value and a categorical interpretation for a bacterial isolate, not an image interpretation by a human observer.
6. If a Standalone Study (Algorithm Only, Without Human-in-the-Loop Performance) Was Done
- This question is framed for AI/ML algorithms. While the device automation ("MicroScan instrumentation," "WalkAway instrument") is a component, the "standalone performance" here refers to the device's ability to accurately determine MIC and categorize susceptibility when compared to the CLSI reference method.
- The study did evaluate the device's performance independently of human interpretation, as it explicitly states panels can be "read either visually or with MicroScan instrumentation." The reported EA and CA numbers reflect the system's performance, including automated reading where applicable.
7. The Type of Ground Truth Used
- Reference Standard: The ground truth used was a CLSI frozen Reference Panel. This is considered the gold standard for comparing the performance of new antimicrobial susceptibility test devices. It provides "true" Minimum Inhibitory Concentration (MIC) values for the bacterial isolates against the antimicrobial agent.
8. The Sample Size for the Training Set
- This is an IVD device, not an AI/ML system that undergoes a separate "training" phase with a large dataset in the sense of machine learning. The device's underlying "knowledge" is built into its design, chemistry, and reading algorithms (for automated methods).
- Therefore, the concept of a "training set" as understood in AI/ML is not applicable to this device.
9. How the Ground Truth for the Training Set Was Established
- As the concept of a "training set" as in AI/ML does not apply here, this point is not applicable. The device's development involved standard microbiological and analytical chemistry principles, validated against established reference methods.
(266 days)
Access hsTnI is a paramagnetic particle, chemiluminescent immunoassay for the quantitative determination of cardiac troponin I (cTnI) levels in human serum and plasma using the Access 2 Immunoassay Analyzers to aid in the diagnosis of myocardial infarction (MI).
The Access hsTnI assay is a two-site immunoenzymatic ("sandwich") assay. Monoclonal anti-cTnI antibody conjugated to alkaline phosphatase is added to a reaction vessel along with a surfactant-containing buffer and sample. After a short incubation, paramagnetic particles coated with monoclonal anti-cTnI antibody are added. The human cTnI binds to the anti-cTnI antibody on the solid phase, while the anti-cTnI antibody-alkaline phosphatase conjugate reacts with different antigenic sites on the cTnI molecules. After incubation, materials bound to the solid phase are held in a magnetic field while unbound materials are washed away. Then, the chemiluminescent substrate is added to the vessel and light generated by the reaction is measured with a luminometer. The light production is directly proportional to the concentration of analyte in the sample. Analyte concentration is automatically determined from a stored calibration.
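The final step described above, converting the measured light signal into an analyte concentration via a stored calibration, can be sketched as follows. Real immunoassay calibrations typically use a fitted curve (for example, a 4-parameter logistic); simple monotone linear interpolation stands in for that here, and all numbers are invented.

```python
# Sketch of signal-to-concentration conversion from a stored calibration.
# Calibrator values (RLU, pg/mL) are hypothetical.

from bisect import bisect_left

def concentration_from_signal(rlu: float, calibration):
    """calibration: list of (rlu, concentration) pairs sorted by RLU.
    Linearly interpolates between the two bracketing calibrators; clamps
    to the end points outside the calibrated range."""
    signals = [s for s, _ in calibration]
    i = bisect_left(signals, rlu)
    if i == 0:
        return calibration[0][1]
    if i == len(calibration):
        return calibration[-1][1]
    (s0, c0), (s1, c1) = calibration[i - 1], calibration[i]
    return c0 + (c1 - c0) * (rlu - s0) / (s1 - s0)

cal = [(1000, 0.0), (5000, 10.0), (25000, 50.0), (125000, 250.0)]
print(concentration_from_signal(15000, cal))  # 30.0 pg/mL
```

Because light production is directly proportional to analyte concentration, the stored curve is monotonically increasing, which is what makes this lookup-and-interpolate step well defined.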
The provided text describes the 510(k) clearance for the Beckman Coulter Access hsTnI device, specifically focusing on demonstrating its equivalence when run on the DxC 500i Clinical Analyzer compared to the previously cleared Access 2 Immunoassay System. The "acceptance criteria" and "study that proves the device meets the acceptance criteria" in this context refer to the analytical performance characteristics required to show substantial equivalence between the new platform (DxC 500i) and the cleared predicate platform (Access 2) for the Access hsTnI assay.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Performance Parameter | Acceptance Criteria (New DxC 500i vs. Predicate Access 2 for Access hsTnI) | Reported Device Performance (Access hsTnI on DxC 500i) |
|---|---|---|
| Platform Equivalency (Method Comparison, Serum): Slope (Passing-Bablok) | Slope 1.00 ± 0.10 | 1.001 |
| Platform Equivalency (Method Comparison, Serum): Slope 95% CI | N/A (implied by slope criteria) | 0.976 – 1.020 |
| Platform Equivalency (Method Comparison, Serum): Intercept (pg/mL) | N/A | -0.184 |
| Platform Equivalency (Method Comparison, Serum): Correlation Coefficient (r) | N/A (implied by clinical performance criteria for r) | 0.999 |
| Platform Equivalency (Method Comparison, Plasma): Slope (Passing-Bablok) | Slope 1.00 ± 0.10 | 0.997 |
| Platform Equivalency (Method Comparison, Plasma): Slope 95% CI | N/A (implied by slope criteria) | 0.978 – 1.016 |
| Platform Equivalency (Method Comparison, Plasma): Intercept (pg/mL) | N/A | 0.560 |
| Platform Equivalency (Method Comparison, Plasma): Correlation Coefficient (r) | N/A (implied by clinical performance criteria for r) | 0.999 |
| Clinical Performance (Method Comparison): Slope | 1.00 ± 0.10 | Met the acceptance criteria (specific value not reported, but stated to be within range) |
| Clinical Performance (Method Comparison): Correlation Coefficient (r) | ≥ 0.90 | Met the acceptance criteria (specific value not reported, but stated to be within range) |
| Imprecision (Total within-laboratory) for levels < 11.5 pg/mL | ≤ 1.15 pg/mL | 0.3 to 0.5 pg/mL (for serum and plasma) |
| Imprecision (Total within-laboratory) for levels ≥ 11.5 pg/mL | ≤ 10.0% | 2.3% to 5.8% (serum), 5.0% to 7.0% (plasma) |
| Linearity (non-linearity) for values < 11.5 pg/mL | within ± 1.15 pg/mL | Verified to be within this limit (specific error not given) |
| Linearity (non-linearity) for values ≥ 11.5 pg/mL | within ± 10% | Verified to be within this limit (specific error not given) |
| Detection Capability (LoB) | < 4.0 pg/mL | Largest observed: 0.2 pg/mL |
| Detection Capability (LoD) | < 4.0 pg/mL | Largest observed: 1.0 pg/mL |
| Detection Capability (LoQ) at 20% CV | ≤ 5.0 pg/mL | Largest observed: 1.0 pg/mL |
| Detection Capability (LoQ) at 10% CV | ≤ 11.5 pg/mL | Largest observed: 1.3 pg/mL |
| Carryover (Intra-assay) | Performance to be evaluated and labeled | Estimated 3-5 pg/mL from 270,000 pg/mL sample; 5-8 pg/mL from 500,000 pg/mL sample |
| Carryover (Inter-assay) | Performance to be evaluated and labeled | Expected < 3.5 pg/mL from 27,000 pg/mL sample; can be up to 77.8 pg/mL from > 1,000,000 pg/mL sample |
Note: The acceptance criteria for the "Platform Equivalency" and "Clinical Performance" studies are largely the same (slope 1.00 ± 0.10 and r ≥ 0.90), indicating the central intent was to show the DxC 500i performs equivalently to the Access 2 for this assay.
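Passing-Bablok regression, used for the slope criterion above, is non-parametric: the slope estimate is a shifted median of all pairwise slopes between data points. As a rough illustration, a simplified sketch (it omits the full tie-handling corrections of the published procedure, and the sample values are hypothetical, not study data):

```python
import numpy as np

def passing_bablok(x, y):
    """Simplified Passing-Bablok regression: slope is the shifted median
    of all pairwise slopes; intercept is the median residual offset.
    (Omits the full tie handling of the published method.)"""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    slopes = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            dx = x[j] - x[i]
            if dx != 0:
                s = (y[j] - y[i]) / dx
                if s != -1:          # slopes of exactly -1 are discarded
                    slopes.append(s)
    slopes = np.sort(slopes)
    k = int(np.sum(slopes < -1))     # offset for slopes below -1
    m = len(slopes)
    if m % 2:                        # shifted median
        slope = slopes[(m - 1) // 2 + k]
    else:
        slope = 0.5 * (slopes[m // 2 - 1 + k] + slopes[m // 2 + k])
    intercept = np.median(y - slope * x)
    return float(slope), float(intercept)

# Two perfectly agreeing methods should give slope 1, intercept 0:
x = np.array([2.0, 5.0, 11.5, 40.0, 120.0, 500.0])
slope, intercept = passing_bablok(x, x)
print(f"slope={slope:.3f}, intercept={intercept:.3f}")
```

A slope within 1.00 ± 0.10 (with the CI bracketing 1) is what the platform-equivalency criterion above checks.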
2. Sample Size Used for the Test Set and Data Provenance
- Platform Equivalency Study (Method Comparison - Representative) Sample Size:
- Serum: N = 106
- Plasma: N = 122
- Clinical Performance (Method Comparison) Sample Size:
- "More than 200 discrete lithium heparin plasma samples" per site for a total of three sites (2 external, 1 internal). This implies a total sample size of >600 for this specific study.
- Imprecision, Linearity, Detection Capability, and Carryover studies: Sample sizes are not explicitly stated for these, but they are implied to be sufficient for the CLSI standards cited (EP05-A3, EP06-2nd Edition, EP17-A2).
- Data Provenance: The studies were conducted at "two external test sites and one internal test site." The specific country of origin is not mentioned, but "Beckman Coulter, Inc." is based in California, USA. The data is prospective as it involves controlled studies and analyses to demonstrate performance on the new platform.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
This section is Not Applicable to this device. The Access hsTnI assay is an in vitro diagnostic (IVD) test that quantitatively measures a biomarker (cardiac troponin I). The "ground truth" for method comparison and analytical performance studies of IVDs is typically established by comparative analysis against a reference method or a legally marketed predicate device, as seen here. It does not involve human expert interpretation of images or signals that would require expert consensus for ground truth.
4. Adjudication Method for the Test Set
This section is Not Applicable. As stated above, this is an IVD device for quantitative measurement. The "test set" in this context refers to patient samples with varying concentrations of the analyte, measured by both the candidate and predicate devices. There is no human interpretation or subjective assessment that would require adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance
This section is Not Applicable. The device is a quantitative immunoassay, not an AI-powered diagnostic imaging or interpretation tool that assists human readers. Therefore, an MRMC study is not relevant.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done
This section is Not Applicable in the context of an "algorithm only" being evaluated for standalone performance. The Access hsTnI device on the DxC 500i is a laboratory instrument system performing a chemical assay. Its "performance" is inherently "standalone" in the sense that the instrument provides a quantitative result without immediate human-in-the-loop assistance for that specific measurement. The "comparison testing" essentially evaluates its standalone performance against a predicate standalone device.
7. The Type of Ground Truth Used
The "ground truth" for the test set was essentially:
- Measurement by the legally marketed predicate device (Access hsTnI on Access 2 Immunoassay System): This is the gold standard against which the performance of the Access hsTnI on the DxC 500i Clinical Analyzer is compared to demonstrate substantial equivalence.
- Definitions within CLSI guidelines: For parameters like precision, linearity, and detection capability, the "ground truth" is adherence to statistical and analytical performance models defined by the specified CLSI (Clinical and Laboratory Standards Institute) guidelines.
8. The Sample Size for the Training Set
This section is Not Applicable. The Access hsTnI is a reagent and instrument system for an immunoassay. It is not an AI/ML algorithm that requires "training data" in the typical sense. The term "training set" is usually associated with machine learning models. The development and validation of such IVD assays involve extensive R&D, method development, and verification on an internal set of samples, but these are not referred to as a "training set" in the context of AI.
9. How the Ground Truth for the Training Set was Established
This section is Not Applicable for the reasons stated in point 8.
(223 days)
The Access Cortisol assay is a paramagnetic particle, chemiluminescent immunoassay for the quantitative determination of cortisol levels in human serum, plasma (heparin, EDTA) and urine using the Access Immunoassay Systems. A cortisol (hydrocortisone and hydroxycorticosterone) test system is a device intended to measure the cortisol hormones secreted by the adrenal gland in serum, plasma and urine. Measurements of cortisol are used in the diagnosis and treatment of disorders of the adrenal gland.
The DxC 500i Clinical Analyzer combines the DxC 500 AU Clinical Chemistry Analyzer and the Access 2 Immunoassay System into a single instrument presentation. The system is for in vitro diagnostic use only.
The chemistry module of the DxC 500i Clinical Analyzer is an automated chemistry analyzer that measures analytes in samples, in combination with appropriate reagents, calibrators, quality control (QC) material and other accessories. The immunoassay module of the DxC 500i Clinical Analyzer is an in-vitro diagnostic device used for the quantitative, semiquantitative, or qualitative determination of various analyte concentrations found in human body fluids.
The Access Cortisol assay is a competitive binding immuno-enzymatic assay designed for use on Beckman Coulter's Access immunoassay analyzers in a clinical laboratory setting.
The DxC 500i Clinical Analyzer is an integrated chemistry-immunoassay work cell that combines Beckman Coulter's DxC 500 AU Clinical Chemistry Analyzer and the Access 2 Immunoassay System into a single instrument presentation. The DxC 500i instrument has a single user interface and common point of entry for sample racks; the sample handling unit operates as a parallel processor and sample manager for both sides of the instrument. The DxC 500i operates in conjunction with the existing reagents, calibrators, controls, and system solutions for the AU and Access instrument families.
The provided text describes the Beckman Coulter Access Cortisol assay on the DxC 500i Clinical Analyzer and its comparison to a predicate device. Here's a breakdown of the acceptance criteria and study information:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria Category | Specific Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Method Comparison | Slope criteria of 1.00 ± 0.12 (using Weighted Deming regression analysis when compared to predicate device) | Serum: Slope = 0.974 (95% CI: 0.952 - 0.996) Urine: Slope = 1.002 (95% CI: 0.976 - 1.029) |
| Linearity | Linear throughout the analytical measuring range. | Determined to be linear throughout the analytical measuring range (2.3 - 60.0 µg/dL). |
| Imprecision (Repeatability & Total) | Allowable imprecision of <12% at approximately 5 µg/dL and <10% for higher concentrations. | Control L1 (approx 3 µg/dL): Instrument 1: Repeatability CV = 4.8%, Total CV = 6.4% Instrument 2: Repeatability CV = 5.6%, Total CV = 7.2% Control L2 (approx 17.7 µg/dL): Instrument 1: Repeatability CV = 3.2%, Total CV = 3.6% Instrument 2: Repeatability CV = 2.8%, Total CV = 4.1% Control L3 (approx 27 µg/dL): Instrument 1: Repeatability CV = 3.0%, Total CV = 3.7% Instrument 2: Repeatability CV = 2.9%, Total CV = 3.6% Serum Pool (approx 47-49 µg/dL): Instrument 1: Repeatability CV = 2.7%, Total CV = 3.8% Instrument 2: Repeatability CV = 3.0%, Total CV = 5.0% |
| Detection Capability | Not explicitly stated as acceptance criteria, but results are provided for LoB, LoD, and LoQ. | Limit of Blank (LoB): ≤ 0.4 µg/dL Limit of Detection (LoD): ≤ 0.4 µg/dL Limit of Quantitation (LoQ) ≤ 20% within-lab CV: ≤ 0.8 µg/dL |
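The LoB/LoD figures above are established per CLSI EP17. In its classical parametric form, LoB is approximated as the 95th percentile of blank-sample results (mean + 1.645 × SD), and LoD adds 1.645 × SD of low-concentration samples on top of the LoB. A minimal sketch, with hypothetical cortisol-scale values (not study data):

```python
import statistics

def limit_of_blank(blank_results):
    """Parametric LoB estimate: 95th percentile of blank-sample results,
    approximated as mean + 1.645 * SD (simplified single-lot form)."""
    return statistics.mean(blank_results) + 1.645 * statistics.stdev(blank_results)

def limit_of_detection(lob, low_level_results):
    """LoD: lowest concentration reliably distinguishable from blank,
    estimated as LoB + 1.645 * SD of low-concentration samples."""
    return lob + 1.645 * statistics.stdev(low_level_results)

# hypothetical blank and low-level measurements, ug/dL
blanks = [0.05, 0.10, 0.00, 0.08, 0.12, 0.06, 0.09, 0.04]
lows   = [0.30, 0.42, 0.25, 0.38, 0.33, 0.29, 0.41, 0.35]
lob = limit_of_blank(blanks)
lod = limit_of_detection(lob, lows)
print(f"LoB = {lob:.2f} ug/dL, LoD = {lod:.2f} ug/dL")
```

The full EP17 procedure pools multiple reagent lots and uses more replicates; this is only the core arithmetic.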
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Method Comparison:
- Serum: N = 104
- Urine: N = 115
- Imprecision: For each of the two instruments and four control levels/serum pools, N = 80 measurements were performed (this implies 20 days x 2 replicates x 2 runs, or similar, over the course of the study).
- Provenance: The document does not specify the country of origin of the data or whether the studies were retrospective or prospective. It does refer to "patient samples" for the method comparison, suggesting real-world samples.
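An EP05-style design of this shape (e.g., 20 days × 2 runs × 2 replicates = 80 measurements) separates repeatability from total within-laboratory imprecision via nested variance components. A simplified sketch using the standard nested-ANOVA mean squares, run on simulated data (the values are not from the study):

```python
import numpy as np

def ep05_precision(x):
    """Variance-components sketch for a CLSI EP05-style design.
    x has shape (days, runs_per_day, reps_per_run).
    Returns (repeatability CV%, total within-laboratory CV%)."""
    d, r, n = x.shape
    grand = x.mean()
    run_means = x.mean(axis=2)            # shape (d, r)
    day_means = x.mean(axis=(1, 2))       # shape (d,)
    ms_error = ((x - run_means[:, :, None]) ** 2).sum() / (d * r * (n - 1))
    ms_run = n * ((run_means - day_means[:, None]) ** 2).sum() / (d * (r - 1))
    ms_day = n * r * ((day_means - grand) ** 2).sum() / (d - 1)
    v_rep = ms_error                              # within-run component
    v_run = max(0.0, (ms_run - ms_error) / n)     # between-run component
    v_day = max(0.0, (ms_day - ms_run) / (n * r)) # between-day component
    v_total = v_rep + v_run + v_day
    cv = lambda v: 100.0 * np.sqrt(v) / grand
    return float(cv(v_rep)), float(cv(v_total))

rng = np.random.default_rng(0)
# simulate 20 days x 2 runs x 2 replicates around a 17.7 ug/dL control,
# with a small between-day effect on top of within-run noise
x = 17.7 + rng.normal(0, 0.5, (20, 2, 2)) + rng.normal(0, 0.3, (20, 1, 1))
rep_cv, total_cv = ep05_precision(x)
print(f"repeatability CV = {rep_cv:.1f}%, total CV = {total_cv:.1f}%")
```

By construction the total within-laboratory CV is always at least the repeatability CV, which is the pattern seen in the cortisol table (e.g., 3.2% repeatability vs. 3.6% total).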
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not applicable as the device is an in-vitro diagnostic test for cortisol levels, not an imaging device requiring expert interpretation of results for ground truth. The "ground truth" for comparison and performance evaluation is generally established by the predicate device's measurements and reference methods.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not applicable for this type of in-vitro diagnostic device. Adjudication methods like 2+1 or 3+1 are typically used in studies involving subjective human interpretation (e.g., radiology reads) where discrepancies between assessors need to be resolved. For quantitative laboratory tests, "ground truth" is typically established by comparing against established reference methods or predicate devices.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
This information is not applicable as the device is an in-vitro diagnostic test for cortisol levels, not an AI-assisted diagnostic tool for human readers.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
This concept is not directly applicable in the same way it would be for an AI algorithm. The Access Cortisol assay on the DxC 500i Clinical Analyzer is an automated in-vitro diagnostic system. The "standalone" performance is essentially what is being evaluated in the method comparison, linearity, imprecision, and detection capability studies, where the system itself generates the quantitative results without subjective human interpretation as part of the core measurement. Human operators are involved in running the instrument and interpreting the numerical results, but the measurement itself is automated.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The "ground truth" in these studies is established by:
- Predicate Device Measurements: For the method comparison study, the Access Cortisol assay run on the predicate Access 2 Immunoassay System serves as the comparator or "reference" for evaluating the candidate device (Access Cortisol assay on the DxC 500i Clinical Analyzer).
- Reference Materials/Established Protocols: For linearity, imprecision, and detection capability, the ground truth is often tied to known concentrations in control materials, spiked samples, and established analytical performance specifications derived from CLSI guidelines. The calibrators use "Human serum containing cortisol (purified chemical compound) at levels of 0 and approximately 2, 5, 10, 25, and 60 µg/dL," with traceability to "USP Reference Material."
8. The sample size for the training set
This information is not provided and is likely not explicitly documented in the submission for an IVD device of this nature. The instrument doesn't explicitly 'train' in the machine learning sense on a dataset. Its operational parameters are set during its development and validated through studies like those described.
9. How the ground truth for the training set was established
As there's no explicitly mentioned "training set" in the context of machine learning, this question is not directly applicable. The device's operational characteristics and performance specifications are established through rigorous analytical verification and validation using known standards, controls, and patient samples, rather than a "training set" in the AI sense.
(109 days)
The Access Syphilis assay is a paramagnetic particle, chemiluminescent immunoassay for the qualitative detection of total antibodies to Treponema pallidum in human serum and plasma using the Access Immunoassay Systems. It is intended to be used as an aid in the diagnosis of syphilis or in conjunction with a nontreponemal laboratory test and clinical findings to aid in the diagnosis of syphilis infection. The Access Syphilis assay is not intended for blood and tissue donor screening.
The Access Syphilis assay is a two-step enzyme immunoassay. A sample is added to a reaction vessel with buffer, paramagnetic particles coated with recombinant Treponema pallidum antigens Tp17 and Tp47, and biotinylated Treponema pallidum Tp17 and Tp47 antigens. After incubation in a reaction vessel, materials bound to the solid phase are held in a magnetic field while unbound materials are washed away. Alkaline phosphatase conjugates are added, and the conjugates bind to the immunoglobulin captured on the particles. A chemiluminescent substrate is added to the vessel and light generated by the reaction is measured with a luminometer. The light production is proportional to the amount of Treponema pallidum antibodies in the sample. The light quantity measured for a sample allows a determination of the presence of the analyte by comparison with a cut-off value defined during assay calibration on the instrument. The Access Syphilis reagents are provided in liquid ready-to-use format designed for optimal performance on the Beckman Coulter Access Immunoassay Systems. Each reagent kit contains two reagent packs.
The Access Syphilis assay is a qualitative immunoassay for detecting total antibodies to Treponema pallidum in human serum and plasma, used as an aid in diagnosing syphilis. The device was evaluated for its clinical performance on two systems: the Access 2 Immunoassay System and the Dxl 9000 Access Immunoassay Analyzer. The acceptance criteria and performance data are primarily based on percent agreement with a composite reference method.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance:
The document does not explicitly state pre-defined acceptance criteria values for Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA). However, the results presented are implicitly the "performance" that aims to demonstrate substantial equivalence. The following table summarizes the reported performance in key clinical evaluation cohorts:
| Performance Metric | Acceptance Criteria (Implicit) | Reported Device Performance (Access 2 & Dxl 9000) |
|---|---|---|
| PPA (Overall Intended Use Population) | High agreement expected, generally >95% | 100% (184/184) [95% CI: 98.0 - 100%] |
| NPA (Overall Intended Use Population) | High agreement expected, generally >95% | 96.7% (890/920) [95% CI: 95.4 - 97.7%] |
| PPA (Retrospective Specimens) | High agreement expected, generally >95% | 100% (398/398) [95% CI: 99.0 - 100%] |
| NPA (Retrospective Specimens) | High agreement expected, generally >95% | 25.0% (1/4) [95% CI: 4.6 - 69.9%] * |
| PPA (High-Risk Individuals) | High agreement expected, generally >95% | 100% (20/20) [95% CI: 83.9 - 100%] |
| NPA (High-Risk Individuals) | High agreement expected, generally >95% | 80.0% (24/30) [95% CI: 62.7 - 90.5%] |
*Note on NPA for Retrospective Specimens: The low NPA for retrospective specimens is due to the very small number of nonreactive specimens in the comparator (only 4), making the percentage highly sensitive to even a single discordant result. The 3 reactive Access Syphilis results where the comparator was nonreactive warrant further investigation; the document notes that "3 specimens were reactive by treponemal immunoassay and nonreactive by RPR and TPPA". This suggests discordance with the composite comparator definition for these few cases rather than a broad failing of the device's negative detection.
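The confidence intervals in the table are consistent with the Wilson score method for a binomial proportion; for example, 1 of 4 yields 4.6%-69.9%, and 184 of 184 yields a 98.0% lower bound. A short sketch:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion,
    the method consistent with the intervals reported in the table."""
    p = successes / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - margin) / denom, (center + margin) / denom

lo, hi = wilson_ci(1, 4)
print(f"NPA 1/4: {100*lo:.1f}% - {100*hi:.1f}%")       # 4.6% - 69.9%
lo, hi = wilson_ci(184, 184)
print(f"PPA 184/184: {100*lo:.1f}% - {100*hi:.1f}%")   # 98.0% - 100.0%
```

Unlike the normal approximation, the Wilson interval behaves sensibly at extreme proportions (0% or 100% observed agreement), which is why it is common in IVD agreement tables.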
2. Sample Size for the Test Set and Data Provenance:
The study involved a total of 1,910 specimens for clinical performance evaluation. These specimens were broadly characterized as:
- 1,104 prospectively collected specimens from the intended use population. The age range was 12 to >89 years, with 63.8% female and 36.2% male. This cohort included:
- 399 patients sent for syphilis testing
- 405 pregnant women
- 300 HIV positive patients
- 402 retrospective specimens from patients (including 22 from pregnant females).
- 204 prospectively collected specimens from apparently healthy individuals.
- 150 retrospective specimens from patients with medically diagnosed syphilis (primary, secondary, and latent stages).
- 50 retrospective specimens from individuals at high-risk of sexually transmitted disease.
The provenance of the data is multicenter, meaning specimens were collected from multiple clinical sites. Both retrospective and prospective data were used. The document does not explicitly state the country of origin, but the submission to the U.S. FDA suggests a U.S. or international clinical setting adhering to FDA standards.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts:
The document does not mention the use of human experts to establish the ground truth for the test set in the traditional sense (e.g., radiologists interpreting images). Instead, the "final comparator result" (ground truth) was established using a composite testing algorithm of multiple FDA-approved laboratory assays, which is standard practice for in vitro diagnostic devices.
4. Adjudication Method for the Test Set:
The adjudication method used a composite testing algorithm as the "final comparator result." This algorithm consisted of:
- An FDA-approved predicate treponemal immunoassay.
- A non-treponemal assay (RPR - Rapid Plasma Reagin).
- A second treponemal assay (TPPA - Treponema Pallidum Particle Agglutination).
The document does not detail a specific "2+1" or "3+1" adjudication process involving human review for discordant results beyond the internal algorithmic comparison. However, for discordant results in the HIV positive patient subpopulation (where the Access Syphilis assay showed lower NPA), an additional FDA-cleared electrochemiluminescent immunoassay was used to further evaluate 28 discordant specimens.
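The exact decision rule of the composite algorithm is not reproduced in the document; purely for illustration, a hypothetical majority-style rule over the three assays could be sketched as follows (the function name and the logic are assumptions, not the cleared algorithm):

```python
def composite_comparator(treponemal_ia, rpr, tppa):
    """HYPOTHETICAL composite reference rule, for illustration only.
    Inputs are booleans (True = reactive) for the predicate treponemal
    immunoassay, RPR, and TPPA. Here, the final status is reactive when
    at least two of the three assays agree on reactivity."""
    return sum([treponemal_ia, rpr, tppa]) >= 2

# The discordant pattern cited in the note (reactive by treponemal
# immunoassay, nonreactive by RPR and TPPA) resolves nonreactive:
print(composite_comparator(True, False, False))  # False
```

Any real composite algorithm would also encode assay ordering and reflex-testing rules, which this two-line majority vote deliberately ignores.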
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:
No, an MRMC comparative effectiveness study was not done. This device is an in vitro diagnostic assay, not an imaging AI device that involves human readers interpreting images with or without AI assistance. The performance is assessed by comparing the device's output to established laboratory reference methods, not human interpretation.
6. If a Standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
Yes, the clinical performance study is a standalone performance evaluation. The Access Syphilis assay is an automated immunoassay system that provides results independently. Its performance (reactive/non-reactive) is directly compared against the composite reference standard without human interpretation of the assay's raw output.
7. The type of ground truth used:
The ground truth was established using a composite testing algorithm consisting of results from:
- An FDA-approved predicate treponemal immunoassay.
- A non-treponemal assay (RPR).
- A second treponemal assay (TPPA).
This acts as a "reference standard" or "expert consensus" in the context of laboratory diagnostics, where consensus among multiple established tests determines the final disease status. In some cases, "medically diagnosed syphilis" for specific retrospective cohorts also contributed to the understanding of the samples.
8. The sample size for the training set:
The document does not specify a separate training set size for the Access Syphilis assay. As a chemiluminescent immunoassay, it is a biochemical test, not an AI/machine learning algorithm that typically requires a discrete "training set" in the same way. The assay's parameters would have been optimized during its development phase using internal studies, but these are not referred to as a "training set" in the context of typical AI device submissions.
9. How the ground truth for the training set was established:
Since a distinct "training set" in the AI/ML sense is not described, the method for establishing its ground truth is not applicable in the document. The development of an immunoassay involves extensive analytical and clinical validation, which ensures the reagents and system accurately detect the target antibodies. This process implicitly refines the assay's performance against known positive and negative samples, similar to how a training set might function for an algorithm.
(157 days)
The UniCel DxH 900/DxH 690T Coulter Cellular Analysis System is a quantitative, multi-parameter, automated hematology analyzer for in vitro diagnostic use in screening patient populations found in clinical laboratories.
The DxH 900/DxH 690T analyzer identifies and enumerates the following parameters:
· Whole Blood (Venous or Capillary): WBC, RBC, HGB, HCT, MCV, MCH, MCHC, RDW, RDW-SD, PLT, MPV, NE%, NE#, LY%, LY#, MO%, MO#, EO%, EO#, BA%, BA#, NRBC%, NRBC#, RET%, RET#, MRV, IRF
· Pre-Diluted Whole Blood (Venous or Capillary): WBC, RBC, HGB, HCT, MCV, MCH, MCHC, RDW, RDW-SD, PLT, MPV
· Body Fluids (cerebrospinal, serous or synovial): TNC and RBC
The UniCel DxH Slidemaker Stainer II Coulter Cellular Analysis System is a fully automated slide preparation and staining device that aspirates a whole-blood sample, smears a blood film on a clean microscope slide, and delivers a variety of fixatives, stains, buffers, and rinse solutions to that blood smear.
The UniCel DxH 900/DxH 690T System contains an automated hematology analyzer (DxH 900 or DxH 690T) designed for in vitro diagnostic use in screening patient populations by clinical laboratories. The system provides a Complete Blood Count (CBC), Leukocyte 5-Part Differential (Diff), Reticulocyte (Retic), Nucleated Red Blood Cell (NRBC) on whole blood, as well as, Total Nucleated Count (TNC), and Red Cell Count (RBC) on Body Fluids (cerebrospinal, serous and synovial).
The DxH Slidemaker Stainer II is a fully automated slide preparation and staining device that aspirates a whole-blood sample, smears a blood film on a clean microscope slide, and delivers a variety of fixatives, stains, buffers, and rinse solutions to that blood smear.
The DxH 900 System may consist of a workcell (multiple connected DxH 900 instruments with or without a DxH Slidemaker Stainer II), a stand-alone DxH 900, or a stand-alone DxH Slidemaker Stainer II. The DxH 690T System consists of a stand-alone DxH 690T instrument.
The provided text is a 510(k) Summary for a medical device submission (K240252) for the UniCel DxH 900/DxH 690T Coulter Cellular Analysis System and the UniCel DxH Slidemaker Stainer II Coulter Cellular Analysis System. This document focuses on demonstrating substantial equivalence to predicate devices rather than proving a device meets specific acceptance criteria as would be the case for a novel device or a device requiring clinical efficacy trials.
Therefore, the acceptance criteria are largely implied by the performance of the predicate device and established CLSI (Clinical and Laboratory Standards Institute) guidelines for analytical performance. The "study" proving the device met acceptance criteria is a series of non-clinical performance verification tests designed to demonstrate that the new devices (DxH 900, DxH 690T, SMS II) perform "as well as or better than" the predicate devices (DxH 800, SMS) across various analytical parameters.
Here's an attempt to structure the information based on your request, understanding that the context is substantial equivalence testing, not a novel device demonstrating de novo clinical acceptance.
Device Under Evaluation for Substantial Equivalence:
- UniCel DxH 900 Coulter Cellular Analysis System
- UniCel DxH Slidemaker Stainer II Coulter Cellular Analysis System
- UniCel DxH 690T Coulter Cellular Analysis System
Predicate Devices:
- UniCel DxH 800 Coulter Cellular Analysis System (K193124)
- UniCel DxH Slidemaker Stainer Coulter Cellular Analysis System (K162414)
The "acceptance criteria" for the new devices are generally linked to demonstrating performance comparable to, or better than, the predicate devices, adhering to established analytical performance standards (e.g., CLSI guidelines). The "study" involves various analytical performance tests comparing the subject devices to the predicate.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are generally implied by the predicate device's performance specifications and adherence to CLSI guidelines. The performance reported below demonstrates that the subject devices meet these implicit criteria (i.e., they perform comparably to the predicate). Due to the extensive list of parameters and a lack of explicit, single "acceptance limit" given for each, I will provide summary tables where possible, extrapolating from the comprehensive data provided in the text.
a. Repeatability (Within-run Imprecision)
- Acceptance Criteria (Implied): Percent Coefficient of Variation (%CV) or Standard Deviation (SD) within specified limits, typically reflecting clinically acceptable imprecision and comparable to predicate device performance.
- Reported Device Performance (DxH 900 - selected parameters; all passed):
| Parameter | Units | Level | N | Test Result Mean | Test Result %CV or SD | Conclusion |
|---|---|---|---|---|---|---|
| WBC | x10³ cells/µL | 5.000 - 10.000 | 10 | 5.80 | 0.69% CV | Pass |
| RBC | x10⁶ cells/µL | 4.500 - 5.500 | 10 | 4.72 | 0.50% CV | Pass |
| Hgb | g/dL | 14.00 - 16.00 | 10 | 15.03 | 0.36% CV | Pass |
| Platelet | x10³ cells/µL | 200.0 - 400.0 | 10 | 256.80 | 1.35% CV | Pass |
| Neut % | % | 50.0 - 60.0 | 10 | 58.24 | 0.99 %CV | Pass |
| Retic % | % | 0.000 - 1.500 | 10 | 1.17 | 0.13 SD | Pass |
| BF-RBC | cells/mm³ | 10,000 - 15,000 | 10 | 12,643 | 2.42 %CV | Pass |
| BF-TNC | cells/mm³ | 50-2,000 | 10 | 594 | 1.28 %CV | Pass |
(Similar comprehensive data provided for DxH 690T, all passed.)
b. Reproducibility (Across-site/day/instrument Imprecision)
- Acceptance Criteria (Implied): Reproducibility (Total CV%) to be within clinically acceptable limits, demonstrating consistent performance across different instruments, days, and runs. The test instruments met the reproducibility specifications for all parameters.
- Reported Device Performance (DxH 900-3S workcell - examples for Level 1, all passed):
| Parameter | Unit | N (total) | Reproducibility CV% | Conclusion |
|---|---|---|---|---|
| WBC | 10^3 cells/uL | 90 | 2.30 | Pass |
| RBC | 10^6 cells/uL | 90 | 0.66 | Pass |
| HGB | g/dL | 90 | 0.52 | Pass |
| PLT | 10^3 cells/uL | 90 | 1.64 | Pass |
| BF TNC | cells/mm^3 | 90 | 7.28 | Pass |
| BF RBC | cells/mm^3 | 90 | 3.97 | Pass |
c. Linearity
- Acceptance Criteria: Deviation between measured and predicted values to be within specified acceptance limits for each parameter across the analytical measuring interval (AMI). All instances showed "Pass".
- Reported Linearity Ranges (all passed on DxH 900-3S workcell):
| Parameter | Units | Linearity Range Results |
|---|---|---|
| WBC | 10³ cells/µL | 0.064 - 408.5 |
| RBC | 10⁶ cells/µL | 0.001 - 8.560 |
| PLT | 10³ cells/µL | 3.2 - 3002 |
| HGB | g/dL | 0.04 - 26.070 |
| BF-RBC | cells/mm³ | 1113.10 - 6,353,906 |
| BF-TNC | cells/mm³ | 31.50 - 92,745 |
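One common way to quantify the "deviation between measured and predicted values" in an EP06-style linearity check is to compare a straight-line fit against a higher-order polynomial fit at each tested level. A simplified sketch with hypothetical dilution data (not study values):

```python
import numpy as np

def max_nonlinearity(levels, measured):
    """EP06-style check (simplified): fit 1st- and 2nd-order polynomials
    to the dilution series and report the largest absolute deviation
    between them at the tested levels; this is then compared against the
    allowable-error limit for the parameter."""
    lin = np.polyval(np.polyfit(levels, measured, 1), levels)
    quad = np.polyval(np.polyfit(levels, measured, 2), levels)
    return float(np.max(np.abs(quad - lin)))

# hypothetical WBC dilution series (x10^3 cells/uL), nearly proportional
levels   = np.array([0.0, 25.0, 50.0, 100.0, 200.0, 300.0, 400.0])
measured = np.array([0.1, 25.2, 50.5, 99.0, 201.0, 298.5, 401.0])
dev = max_nonlinearity(levels, measured)
print(f"max nonlinearity = {dev:.2f} x10^3 cells/uL")
```

The full guideline also considers third-order fits and statistical significance of the higher-order coefficients; this sketch keeps only the core comparison.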
d. Carryover
- Acceptance Criteria: Carryover to be below specified percentages/event counts (e.g., for WBC, RBC, Hgb, PLT limits <0.5% or <1.0%; for Diff, NRBC, Retic, event counts <200, <75, <600 respectively). All test instruments met these specifications.
- Reported Device Performance (DxH 900 & DxH 690T - examples for WBC > 90):
| CnDR Mode | WBC | RBC | Hgb | PLT | Diff | NRBC | Retic |
|---|---|---|---|---|---|---|---|
| DxH 690T | 0.11% | 0.03% | 0.26% | 0.07% | 22, 20, 35 | 12, 5, 7 | 25, 20, 22 |
| DxH 900 | 0.09% | 0.05% | 0.23% | 0.17% | 11,15,37 | 7,1,3 | 10,6,3 |
| Spec | <0.5% | <0.5% | <1.0% | <1.0% | <200 Events | <75 Events | <600 Events |
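A carryover protocol matching the description above (three high-value tubes followed by three low-value tubes, repeated) commonly estimates percent carryover as (L1 - L3) / (H3 - L3) x 100, where L1 and L3 are the first and third low results and H3 the third high result. A sketch with hypothetical counts:

```python
def carryover_percent(high_runs, low_runs):
    """Percent carryover from three consecutive high-value aspirations
    followed by three low-value aspirations: (L1 - L3) / (H3 - L3) * 100.
    (One common formulation; the submission's exact protocol may differ.)"""
    h3 = high_runs[2]
    l1, l3 = low_runs[0], low_runs[2]
    return 100.0 * (l1 - l3) / (h3 - l3)

# hypothetical WBC counts (x10^3 cells/uL)
high = [95.2, 94.8, 95.0]
low = [2.06, 2.01, 2.02]
print(f"carryover = {carryover_percent(high, low):.2f}%")  # well under 0.5%
```

Values like the 0.09-0.11% reported for WBC above indicate that the first low sample after a high sample is barely elevated relative to later low samples.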
e. Performance Detection Capability Limits (LLoQ)
- Acceptance Criteria: Limits of Quantitation (LoQ) to be ≤ specified goals. All results met the acceptance limits.
- Reported LLoQ Results (all passed):
| Parameter | Units | Acceptance limit (LoQ) | LoQ result | Conclusion |
|---|---|---|---|---|
| WBC | x10³ cells/μL | ≤0.050 | 0.019 | Pass |
| PLT | x10³ cells/μL | ≤3.000 | 0.757 | Pass |
| BF TNC | cells/mm³ | ≤20.000 | 14.004 | Pass |
| BF RBC | cells/mm³ | ≤1000.000 | 979.869 | Pass |
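An LoQ at a stated CV goal is typically read off a precision profile: the lowest concentration at which imprecision meets the goal, interpolated between tested levels. A sketch with a hypothetical profile (assumed values, not study data):

```python
import numpy as np

def loq_from_precision_profile(conc, cv, cv_goal):
    """Lowest concentration whose CV meets the goal, linearly interpolating
    the precision profile (assumes CV decreases as concentration rises)."""
    conc, cv = np.asarray(conc, float), np.asarray(cv, float)
    for i in range(len(conc)):
        if cv[i] <= cv_goal:
            if i == 0:
                return float(conc[0])
            # interpolate between the first passing level and the one below
            frac = (cv[i - 1] - cv_goal) / (cv[i - 1] - cv[i])
            return float(conc[i - 1] + frac * (conc[i] - conc[i - 1]))
    return None  # goal never met within the tested range

# hypothetical PLT precision profile (x10^3 cells/uL vs CV%)
conc = [0.5, 1.0, 2.0, 4.0, 8.0]
cv   = [35.0, 22.0, 14.0, 9.0, 5.0]
print(loq_from_precision_profile(conc, cv, cv_goal=20.0))  # -> 1.25
```

This is the same logic behind criteria like "LoQ at 20% CV" in the hsTnI section earlier: the claimed LoQ must sit at or below the concentration where the profile crosses the CV goal.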
f. Method Comparison (Accuracy vs. Predicate)
- Acceptance Criteria (Implied): Statistical analysis (e.g., mean difference, bias, correlation coefficient) showing substantial equivalence to the predicate device within clinically acceptable limits for all parameters across the AMI. The data confirmed substantial equivalence.
- Reported Device Performance (All Sites Combined - selected parameters; all comparisons to the predicate passed):
  - Whole Blood (DxH 900 vs. DxH 800):
    - Difference (Mean): e.g., HCT: -0.297, HGB: -0.022, WBC: 0.055, PLT: -0.075, NE %: 0.014
    - Correlation (R): e.g., HCT: 0.99860, HGB: 0.99950, WBC: 0.99960, PLT: 0.99890, NE %: 0.99860
  - Body Fluid (BF-TNC, BF-RBC - DxH 900 vs. DxH 800):
    - Difference (Mean): BF RBC: 2601.802, BF TNC: 50.878
    - Correlation (R): BF RBC: 0.99991, BF TNC: 0.99989
g. Flagging Performance (Comparison to Predicate)
- Acceptance Criteria (Implied): Negative Percent Agreement (NPA), Positive Percent Agreement (PPA), and Overall Percent Agreement (OPA) with the predicate device (DxH 800) should be high, demonstrating comparable flagging capabilities. All categories showed high agreement.
- Reported Device Performance (DxH 900 vs. DxH 800):
| Category of Abnormalities | NPA (95% CI) | PPA (95% CI) | OPA (95% CI) | Conclusion |
|---|---|---|---|---|
| Morphological Abnormalities for WB | 0.9555 (0.9341 to 0.9702) | 0.8853 (0.8362 to 0.9211) | 0.9347 (0.9145 to 0.9504) | Pass |
| Distributional Study of WB | 0.8810 (0.8389 to 0.9131) | 0.9501 (0.9256 to 0.9668) | 0.9224 (0.9008 to 0.9397) | Pass |
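The NPA/PPA/OPA figures reduce to a 2x2 contingency table of the candidate's flags against the predicate's, with score intervals for the confidence bounds. A hedged sketch; the 2x2 cell counts below are invented, since the submission reports only the derived proportions:

```python
import math

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def agreement(tp: int, fp: int, fn: int, tn: int):
    ppa = tp / (tp + fn)                   # positive percent agreement
    npa = tn / (tn + fp)                   # negative percent agreement
    opa = (tp + tn) / (tp + fp + fn + tn)  # overall percent agreement
    return ppa, npa, opa

# Invented cell counts summing to the 735-sample flagging set:
ppa, npa, opa = agreement(tp=270, fp=20, fn=35, tn=410)
lo, hi = wilson_ci(270, 305)   # interval for the PPA
```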
2. Sample Size Used for the Test Set and Data Provenance
- Test Set (Samples/Analytes):
- Repeatability: Varied. For whole blood, 10 aspirations per measurement. For some abnormal/low levels, contrived samples/control material used.
- Reproducibility: 90 replicates per parameter (3 instruments x 5 days x 2 times/day x 3 shots/instrument).
- Linearity: Minimum of seven (7) dilutions, tested in quadruplicate in random order on three (3) instruments for each measurand.
- Carryover: Not specified for number of patient samples, but testing used three (3) high target value tubes and three (3) low target value tubes, repeated three (3) times.
- Performance Detection Capability Limits (LoB, LLoD, LLoQ):
- LoB: 120 whole blood cycles (DxH diluent as blank, 5 replicates x 20 total for whole blood, 120 single tube BF cycles for body fluid), using 2 lots of diluent and cell lyse reagents.
- LLoD/LLoQ: 4 samples per parameter, 3 sets of 11 dilutions from stock, 1 set analyzed on each of 3 instruments. 5 replicates per dilution level.
- Reference Ranges: Sample size not explicitly stated, but described as sufficient per CLSI EP28-A3c to verify reference intervals, which typically requires a large number of healthy individuals (e.g., >120).
- Method Comparison (Whole Blood): 735 whole blood specimens from 3 clinical sites (adult and pediatric samples).
- Method Comparison (Body Fluid): 195 body fluid specimens (BF TNC), 130 body fluid specimens (BF RBC) from multiple sites.
- Flagging Analysis: 735 whole blood samples (residual normal and abnormal) from three (3) clinical sites.
- Data Provenance: Data were collected from multiple clinical sites (for method comparison and flagging analysis), and testing included workcell configurations (DxH 900-3S workcell) as well as stand-alone instruments. The data appear to be prospective, as they involve active testing of the new devices. The countries of origin are not specified; as is typical of FDA submissions, this implies US-based or international sites compliant with US regulations.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not mention experts and their qualifications for establishing ground truth. This is a common characteristic of analytical performance studies for IVD devices like hematology analyzers. The "ground truth" for quantitative measurements (e.g., WBC, RBC counts) is typically the measurement itself obtained from a reference method or predicate device, often with rigorous calibration and quality control. For qualitative aspects like flagging, the ground truth is established by comparing the flag output to the predicate device's flag output, assuming the predicate's performance is already validated. There is no indication of human "expert" review for individual case ground truth for these types of measurements.
4. Adjudication Method for the Test Set
Since ground truth is based on predefined analytical measurements against reference methods/predicate devices rather than human interpretation, an adjudication method like 2+1 or 3+1 (common in image-based AI studies) is not applicable and not mentioned in the document.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study was not done. These types of studies are typically performed for AI-assisted diagnostic devices where the AI is intended to improve human reader performance (e.g., radiologists interpreting images). This submission is for an automated hematology analyzer, where the device performs the analysis directly without human interpretation in the loop in the same way. The evaluation is focused on the device's analytical performance (accuracy, precision, linearity, etc.) compared to its predicate device.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, the core of this submission is a standalone performance evaluation of the DxH 900/690T systems. The "algorithm" here refers to the instrument's internal processing of raw signals to derive reported parameters (e.g., cell counts, differentials). Its performance is assessed independently of human intervention during the measurement process, and its output is compared to a reference standard (predicate device) and expected analytical capabilities.
7. The Type of Ground Truth Used
The ground truth for the analytical performance studies (repeatability, linearity, method comparison, etc.) is based on:
- Reference Method / Predicate Device Comparison: The performance of the UniCel DxH 900/DxH 690T is directly compared against the established performance of the legally marketed predicate UniCel DxH 800 and previously cleared UniCel DxH Slidemaker Stainer. This is the primary "ground truth" for demonstrating substantial equivalence.
- CLSI Guidelines: Adherence to established CLSI (Clinical and Laboratory Standards Institute) protocols (e.g., EP09c, H26-A2, EP05-A3, EP06-A, EP17-A2, EP28-A3c) implicitly defines the "truth" or expected range of acceptable analytical performance for these types of in vitro diagnostic devices.
- Control Materials and Calibrators: Certified reference materials and quality control products (e.g., COULTER 6C Cell Control, COULTER S-CAL Calibrator) are used to establish and verify instrument accuracy and precision, serving as a daily "ground truth" check.
- Fresh Patient Samples: Used in method comparison and other studies to ensure real-world performance reflects clinical conditions.
There is no mention of "expert consensus," "pathology," or "outcomes data" being used as ground truth in the traditional sense for these analytical performance studies of a hematology analyzer.
8. The Sample Size for the Training Set
The document does not provide information on the training set size. Hematology analyzers, while highly sophisticated, are typically rule-based systems or calibrated analytical instruments, rather than machine learning/AI models that require explicit "training sets" in the modern sense (e.g., thousands of labeled images for deep learning). Their "training" involves instrument calibration using standardized calibrators. If there are adaptive algorithms or older machine learning components, their "training" data is internal to the manufacturer's development process and not typically disclosed in a 510(k) summary focused on analytical validation.
9. How the Ground Truth for the Training Set Was Established
As noted above, the concept of a "training set" and its "ground truth" in the context of a modern AI/ML device is not directly applicable to this traditional analytical instrument unless it has specific, undisclosed internal adaptive algorithms. The "ground truth" for the instrument's operational accuracy and precision is primarily established through its calibration process using certified calibrator materials and verified against quality control materials and comparisons to reference methods or predicate devices as part of its manufacturing and analytical validation.
(85 days)
Access Thyroglobulin assay is a paramagnetic particle, chemiluminescent immunoassay for the quantitative determination of thyroglobulin levels in human serum and plasma using the Access Immunoassay Systems. This device is intended to aid in monitoring for the presence of persistent or recurrent/metastatic disease in patients who have differentiated thyroid cancer (DTC) and have had thyroid surgery (with or without ablative therapy), and who lack serum thyroglobulin antibodies.
The Access Thyroglobulin assay consists of the reagent pack and calibrators. Other items needed to run the assay include the Access Thyroglobulin Sample Diluent, substrate and wash buffer. The Access Tg assay along with the Access wash buffer and substrate are designed for use with the Access Immunoassay Systems in a clinical laboratory setting.
Lumi-Phos PRO substrate was used with this pack. The modification does not affect the indications of the device or alter the fundamental scientific technology of the device.
The provided text is a 510(k) summary for the Access Thyroglobulin assay, which is a diagnostic device and not an AI/ML device. Therefore, many of the requested categories related to AI/ML device studies, such as "Number of experts used to establish the ground truth," "Adjudication method," "MRMC comparative effectiveness study," and "sample size for the training set," are not applicable.
However, I can extract the relevant information regarding acceptance criteria and study results for this diagnostic device.
Acceptance Criteria and Reported Device Performance for Access Thyroglobulin Assay (K240927)
1. Table of Acceptance Criteria and the Reported Device Performance
| Performance Metric | Acceptance Criteria (Implicit from reported results and CLSI guidelines) | Reported Device Performance (Access Thyroglobulin on DxI 9000) |
|---|---|---|
| Method Comparison | Slope of 1.00 (95% CI covering 1.00); Intercept of 0.00 (95% CI covering 0.00); High Correlation Coefficient (R close to 1.00) | Slope: 1.00 (0.99 - 1.00); Intercept: 0.0044 (-0.029 - 0.021); Correlation Coefficient R: 1.00 |
| Imprecision (Within-lab/Total) | CV ≤ 10.0% at concentrations > 1.0 ng/mL; SD ≤ 0.1 ng/mL at concentrations ≤ 1.0 ng/mL | Achieved across all tested concentrations (e.g., 8.4% at 0.30 ng/mL, 6.8% at 5.5 ng/mL, 6.3% at 22 ng/mL, 2.5% at 111 ng/mL, 3.6% at 376 ng/mL, 3.6% at 417 ng/mL) |
| Reproducibility | Not explicitly stated as a separate acceptance criterion, but results imply meeting acceptable reproducibility for clinical use. | Example: Within-run CV 5.9% (0.34 ng/mL), Reproducibility CV 7.4% (0.34 ng/mL); Within-run CV 2.5% (402 ng/mL), Reproducibility CV 5.9% (402 ng/mL) |
| Linearity | Assay demonstrates linearity across the measuring interval. | Demonstrated linearity across the measuring interval. |
| Limit of Blank (LoB) | Not explicitly stated as a numerical criterion, but a low value is expected for accurate detection. | 0.03 ng/mL |
| Limit of Detection (LoD) | Not explicitly stated as a numerical criterion, but a low value is expected for accurate detection. | 0.05 ng/mL |
| Limit of Quantitation (LoQ) ≤20% within-lab CV | ≤ 0.1 ng/mL at 20% within-lab CV (explicitly stated criteria) | 0.1 ng/mL |
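The dual imprecision criterion in the table above (%CV at concentrations above 1.0 ng/mL, absolute SD at or below 1.0 ng/mL) can be expressed as a simple check. A sketch with invented replicate values; the function name is illustrative:

```python
from statistics import mean, stdev

def meets_imprecision_goal(replicates: list[float]) -> bool:
    """CV <= 10.0% above 1.0 ng/mL; SD <= 0.1 ng/mL at or below 1.0 ng/mL."""
    m, sd = mean(replicates), stdev(replicates)
    if m > 1.0:
        return (sd / m) * 100 <= 10.0   # %CV criterion
    return sd <= 0.1                     # absolute-SD criterion

low_pool = [0.30, 0.28, 0.33, 0.31, 0.29]     # invented ng/mL replicates
high_pool = [22.1, 21.5, 22.8, 22.3, 21.9]
print(meets_imprecision_goal(low_pool), meets_imprecision_goal(high_pool))
```

The switch to an absolute SD near the detection limit reflects that %CV becomes unstable as the mean approaches zero.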
2. Sample sizes used for the test set and the data provenance
- Method Comparison: N = 187 samples. Data provenance is not specified (e.g., country of origin, retrospective/prospective).
- Imprecision: For each sample, N = 88 or 80. Data provenance is not specified.
- Reproducibility: For each sample, N = 75. Data provenance is not specified.
- Linearity, LoB, LoD, LoQ: Sample sizes for specific points within the linearity study or number of samples for LoB/LoD/LoQ determinations are not explicitly given, but the studies were conducted using "multiple samples," "multiple reagent lots," and "multiple days." Data provenance is not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This is a quantitative immunoassay for measuring thyroglobulin levels. The 'ground truth' for such a device is established by the analytical reference measurement procedures using a reference method or known concentrations, rather than expert consensus on diagnostic images or clinical assessments. Therefore, this question is not applicable in the context of this device.
4. Adjudication method for the test set
Not applicable for a quantitative immunoassay. The comparison is statistical analysis of measured values against a predicate device or expected values from reference materials.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
Not applicable. This is not an AI/ML device involving human readers or interpretation.
6. If a standalone (i.e. algorithm only without human-in-the loop performance) was done
The device is an automated immunoassay system. The performance studies ("Method Comparison," "Imprecision," "Reproducibility," "Linearity," "Detection Capability") represent the standalone performance of the assay and instrument without human interpretation of raw signals influencing the final quantitative result.
7. The type of ground truth used
For this immunoassay device, the "ground truth" implicitly refers to:
- Reference measurements from the predicate device (Access 2 Immunoassay System): Used for the method comparison study.
- Known concentrations/reference materials: Used to assess imprecision, linearity, and detection capabilities (LoB, LoD, LoQ) against expected values.
8. The sample size for the training set
Not applicable. This is a traditional diagnostic device, not an AI/ML device that requires a training set.
9. How the ground truth for the training set was established
Not applicable, as there is no training set for this type of device.
(18 days)
Access Thyroglobulin assay is a paramagnetic particle, chemiluminescent immunoassay for the quantitative determination of thyroglobulin levels in human serum and plasma using the Access Immunoassay Systems. This device is intended to aid in monitoring for the presence of persistent or recurrent/metastatic disease in patients who have differentiated thyroid cancer (DTC) and have had thyroid surgery (with or without ablative therapy), and who lack serum thyroglobulin antibodies.
The Access Thyroglobulin assay consists of the reagent pack and calibrators. Other items needed to run the assay include the Access Thyroglobulin Sample Diluent, substrate and wash buffer. The Access Tg assay along with the Access wash buffer and substrate are designed for use with the Access Immunoassay Systems in a clinical laboratory setting.
The change does not impact or change the other components that are used with this reagent pack. The modification does not affect the indications of the device or alter the fundamental scientific technology of the device.
A description of the reagent pack is provided below.
| Well | Ingredients |
|---|---|
| R1a: | Dynabeads* paramagnetic particles coated with streptavidin and coupled to biotinylated mouse monoclonal anti-thyroglobulin antibodies, suspended in a TRIS buffer with protein (bovine), < 0.1% sodium azide, and 0.1% ProClin** 300. |
| R1b: | Mouse monoclonal anti-thyroglobulin-alkaline phosphatase (bovine) conjugate in a TRIS buffer with protein (bovine, murine), < 0.1% sodium azide, and 0.1% ProClin 300. |
| R1c: | HEPES buffer with protein (bovine and mouse), < 0.1% sodium azide, and 0.5% ProClin 300. |
The provided document is a 510(k) Premarket Notification from the FDA for the Access Thyroglobulin assay. It does not describe an AI/ML-based medical device. Therefore, many of the requested criteria about AI/ML studies (such as MRMC studies, ground truth establishment for training sets, number of experts for test set ground truth, etc.) are not applicable to this submission.
The acceptance criteria and study proving the device meets them are related to the analytical performance of an immunoassay, not a software algorithm.
Here's a breakdown based on the provided text, addressing the applicable points and noting where information is not present or not relevant to AI/ML:
1. A table of acceptance criteria and the reported device performance
The document focuses on demonstrating substantial equivalence to a predicate device, primarily through a matrix comparison study for a new sample type (plasma in addition to serum). The "acceptance criteria" are implied by the statistical analyses and acceptable ranges for slope, intercept, and correlation coefficient in the matrix comparison, aiming for agreement between the new sample type and the established serum sample type.
Acceptance Criteria (Implied by Study Design for Matrix Comparison):
For the Matrix Comparison study, the implicit acceptance criteria are that the Passing-Bablok linear regression results (slope, intercept, and correlation coefficient) demonstrate substantial equivalence between the new sample types (Li-heparin plasma, Na-heparin plasma) and serum. While explicit numeric acceptance criteria are not stated, typically for such comparisons, a slope close to 1, an intercept close to 0, and a high correlation coefficient (e.g., >0.97) are expected within their confidence intervals.
Reported Device Performance (Matrix Comparison):
| Plasma/Serum | N | Range (ng/mL) | Slope (95% CI) | Intercept (95% CI) | Correlation Coefficient (r) |
|---|---|---|---|---|---|
| Li-heparin plasma vs Serum | 45 | 0.227 to 494.070 | 1.000 (0.983; 1.015) | 0.163 (-0.212; 0.712) | 0.999 |
| Na-heparin plasma vs Serum | 45 | 0.227 to 494.070 | 1.021 (1.010; 1.039) | 0.147 (-0.246; 0.952) | 0.999 |
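Passing-Bablok regression, used for the slope and intercept estimates above, is a nonparametric fit built from pairwise slopes. A rough sketch of the basic median-of-slopes form, without the tie and offset corrections of the full procedure; the data below are invented, not from the matrix comparison:

```python
from statistics import median

def passing_bablok(x: list[float], y: list[float]) -> tuple[float, float]:
    """Median of pairwise slopes, then median residual intercept."""
    n = len(x)
    slopes = []
    for i in range(n):
        for j in range(i + 1, n):
            dx = x[j] - x[i]
            if dx != 0:
                slopes.append((y[j] - y[i]) / dx)
    slope = median(slopes)
    intercept = median([yi - slope * xi for xi, yi in zip(x, y)])
    return slope, intercept

serum = [0.5, 2.0, 10.0, 50.0, 200.0]      # invented Tg, ng/mL
plasma = [0.52, 2.05, 10.1, 50.4, 201.0]
slope, intercept = passing_bablok(serum, plasma)
```

A slope near 1 with an intercept near 0, as in the table, indicates the two matrices agree across the measuring range.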
Other Performance Claims Transferred from Predicate:
The document states that claims for "method comparison, imprecision, reproducibility, high-dose hook effect, linearity, dilution recovery, detection capability and analytical specificity are being transferred from file K220972." This implies these studies were performed and met acceptance criteria for the predicate device, and the current modification (addition of plasma sample type) does not invalidate them. Explicit tables for these are not in the provided text.
2. Sample sizes used for the test set and the data provenance
- Test Set Sample Size: For the Matrix Comparison study, 45 matched sets of serum and plasma samples were used for each comparison (Li-heparin plasma vs Serum, and Na-heparin plasma vs Serum). The minimum specified was 40 matched sets.
- Data Provenance: The document does not specify the country of origin of the data or whether the study was retrospective or prospective. Given it's a clinical lab device, the samples would typically be from clinical settings.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This is not applicable as the device is an immunoassay, not an AI/ML device relying on human interpretation of images or other complex data for ground truth. The "ground truth" here is the quantitative measurement of thyroglobulin by the predicate method (serum measurement) against which the new sample type (plasma measurement) is compared. The reference values are analytical measurements, not expert consensus.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable for an immunoassay analytical validation. The ground truth (serum concentration) is established by the assay itself, not by human adjudication.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
Not applicable. This is not an AI/ML device, and there are no "human readers" interpreting images assisted by AI.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, in a sense. The "standalone" performance for this device refers to its analytical performance as a laboratory test. The Matrix Comparison study assesses the device's capability to accurately measure thyroglobulin in plasma samples compared to serum samples, without human interpretive input affecting the measurement itself. The results shown in point 1 demonstrate this "standalone" analytical performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The ground truth for the matrix comparison study was the quantitative thyroglobulin concentration measured in human serum using the previously cleared Access Thyroglobulin assay. The new sample types (plasma) were compared to these established serum values.
8. The sample size for the training set
Not applicable. This is not an AI/ML device that requires a training set. The assay's parameters are determined through reagent development and analytical validation, not machine learning training.
9. How the ground truth for the training set was established
Not applicable, as there is no training set for this type of device.