Search Results

Found 6 results

510(k) Data Aggregation

    K Number
    K032162
    Date Cleared
    2003-08-08

    (24 days)

    Product Code
    Regulation Number
    866.3780
    Reference & Predicate Devices
    Reference Devices :

    K023764

    Intended Use

    The Access Toxo IgG assay is a paramagnetic-particle, chemiluminescent immunoassay for the qualitative and quantitative determination of IgG antibodies to Toxoplasma gondii in human serum, using the Access Immunoassay Systems. The Access Toxo IgG assay aids in the diagnosis of Toxoplasma gondii infection and in the determination of protective levels of antibodies in pregnant women.

    Device Description

    The Access Toxo IgG reagents consist of reagent packs, calibrators, QC, substrate and wash buffer.

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and the study that proves the device meets them:

    1. A table of acceptance criteria and the reported device performance

    The provided text does not explicitly state numerical acceptance criteria for the studies. Instead, it makes a general statement about the device meeting established criteria.

    Acceptance Criteria | Reported Device Performance
    Reproducibility | Met established acceptance criteria
    Concordance | Demonstrated acceptable concordance
    Linearity | Demonstrated acceptable linearity
    Correlation (DxI vs. Access 2) | Demonstrated good correlation
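    The correlation claim above is the kind of result a method-comparison study quantifies with linear regression on paired patient results from the two platforms. A minimal sketch, using invented paired values (the 510(k) summary reports no numbers):

```python
# Illustrative sketch of a two-platform method comparison. All values are
# hypothetical; the submission does not report the underlying data.

def linear_regression(x, y):
    """Ordinary least-squares slope and intercept for paired results."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def pearson_r(x, y):
    """Pearson correlation coefficient for paired results."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical paired IgG results (IU/mL) on the two platforms
access2 = [1.2, 3.4, 5.1, 8.0, 12.3, 20.5, 31.0]
dxi800  = [1.3, 3.3, 5.4, 8.2, 12.0, 21.1, 30.4]

slope, intercept = linear_regression(access2, dxi800)
r = pearson_r(access2, dxi800)
print(f"slope={slope:.3f} intercept={intercept:.3f} r={r:.4f}")
```

    A slope near 1, an intercept near 0, and a high correlation coefficient are the usual indications of platform equivalence in such a study.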

    2. Sample size used for the test set and the data provenance

    The document does not explicitly state the sample size used for the test set. It mentions "concordance study data" for the method comparison, implying a test set was used, but the size is not specified. The provenance of the data (e.g., country of origin, retrospective or prospective) is also not mentioned.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This information is not provided in the document. The study focuses on comparing two instrument platforms for an immunoassay, not on establishing a ground truth for diagnostic accuracy based on expert review.

    4. Adjudication method for the test set

    This information is not applicable and not provided. The study is a comparison between two laboratory instrument platforms, not an assessment requiring expert adjudication of cases.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    An MRMC comparative effectiveness study was not done. This study is about the performance of an immunoassay on a new instrument platform, not about AI assistance for human readers.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    This concept is not directly applicable. The "device" in question is an immunoassay system (hardware and reagents), not a standalone algorithm. The study assesses the performance of this system as a whole. Its performance is inherently "standalone" in the sense that the instrument processes the samples and delivers results without human interpretation within the assay measurement, but it's not an algorithm in the typical AI sense.

    7. The type of ground truth used

    The concept of "ground truth" as it typically applies to diagnostic accuracy (e.g., pathology, outcomes data) is not directly relevant here. The study aims to demonstrate substantial equivalence between two instrument platforms using the same assay. The "ground truth" for the comparison would be the results generated by the already cleared "Access 2" System, which the new "DxI 800" System is being compared against.

    8. The sample size for the training set

    This information is not provided because an AI/machine learning training set is not applicable to an immunoassay instrument and reagent system. This is a traditional laboratory diagnostic device.

    9. How the ground truth for the training set was established

    This information is not provided and is not applicable, as there is no "training set" in the context of this immunoassay system.


    K Number
    K031606
    Date Cleared
    2003-06-20

    (29 days)

    Product Code
    Regulation Number
    866.3510
    Reference & Predicate Devices
    Reference Devices :

    K954687, K023764

    Intended Use

    The Access Rubella IgG assay is a paramagnetic-particle, chemiluminescent immunoassay for the qualitative and quantitative determination of IgG antibodies to the rubella virus in human serum using the Access Immunoassay Systems. The Access Rubella IgG assay aids in the diagnosis of rubella infection and the determination of immunity.

    Device Description

    The Access Rubella IgG reagents consist of reagent packs, calibrators, QC, substrate and wash buffer.

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the Access® Rubella IgG Assay:

    Acceptance Criteria and Device Performance:

    Acceptance Criteria | Reported Device Performance
    Establishment of substantial equivalence to the legally marketed predicate device (Access Rubella IgG Assay on the Access 2 System) | The Access Rubella IgG assay on the DxI System met the established acceptance criteria for reproducibility and concordance
    Acceptable linearity for the Access Rubella IgG assay on the DxI System | The assay demonstrated acceptable linearity
    Good correlation between the DxI and Access 2 Systems in a method comparison (linear regression) study using clinical data | Good correlation was demonstrated
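    For the qualitative side of an assay like this, concordance against the predicate platform is conventionally summarized as positive and negative percent agreement. A hypothetical sketch (all calls invented, not taken from the submission):

```python
# Illustrative concordance calculation for qualitative (pos/neg) results
# from two platforms. The data below are invented.

def concordance(ref_calls, new_calls):
    """Positive, negative, and overall percent agreement vs. a reference."""
    pairs = list(zip(ref_calls, new_calls))
    pos_agree = sum(1 for r, n in pairs if r == n == "pos")
    neg_agree = sum(1 for r, n in pairs if r == n == "neg")
    ref_pos = sum(1 for r, _ in pairs if r == "pos")
    ref_neg = sum(1 for r, _ in pairs if r == "neg")
    ppa = pos_agree / ref_pos       # positive percent agreement
    npa = neg_agree / ref_neg       # negative percent agreement
    overall = (pos_agree + neg_agree) / len(pairs)
    return ppa, npa, overall

# Hypothetical calls: 100 samples, one discordant positive
access2 = ["pos"] * 40 + ["neg"] * 60
dxi800  = ["pos"] * 39 + ["neg"] + ["neg"] * 60

ppa, npa, overall = concordance(access2, dxi800)
print(f"PPA={ppa:.1%} NPA={npa:.1%} overall={overall:.1%}")
```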

    Study Details:

    1. Sample sizes used for the test set and the data provenance: Not explicitly stated. The document mentions "clinical data" but does not specify the number of samples, their country of origin, or whether the data were collected retrospectively or prospectively.

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable for this type of device. The ground truth for this immunoassay device is typically established through a combination of reference methods (e.g., CDC reference assays), serology algorithms, and potentially clinical correlation, rather than expert consensus on images or clinical cases. The document does not provide details on how the ground truth for the "clinical data" used in the method comparison was established.

    3. Adjudication method for the test set: Not applicable. Adjudication methods like 2+1 or 3+1 are typically used for qualitative assessments, often in image interpretation, where human readers might disagree. This device is an immunoassay, and its performance is evaluated quantitatively (reproducibility, linearity, correlation).

    4. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance: Not applicable. This is an immunoassay device, not an AI-assisted diagnostic tool that aids human readers. Therefore, an MRMC study and effects on human reader performance are not relevant.

    5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done: Yes, the described studies (reproducibility, concordance, linearity, method comparison) evaluate the standalone performance of the Access Rubella IgG assay on the DxI System. The device provides a quantitative result (IgG antibody levels) or a qualitative determination (positive/negative) without the kind of human interpretation of raw data that an imaging AI algorithm would involve.

    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): The specific ground truth for the "clinical data" used in the method comparison study is not explicitly detailed. For an immunoassay like this, ground truth for rubella infection and immunity typically relies on established serological reference methods, clinical history, and potentially a combination of IgM and IgG results, rather than pathology or expert consensus on clinical presentation alone. The predicate device (Access 2 System) is implicitly considered a reference standard for the purpose of demonstrating substantial equivalence.

    7. The sample size for the training set: Not applicable. This device is an immunoassay system, not a machine learning or AI algorithm that requires a "training set" in the conventional sense. The "training" for such a system involves the development and optimization of reagents and protocols, which is a different process than training an AI model.

    8. How the ground truth for the training set was established: Not applicable, as there is no "training set" in the AI/ML context for this device.
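    Among the performance claims discussed above, linearity is conventionally demonstrated with a dilution series: a high-titer sample is serially diluted and each measured value is compared against the value expected from the dilution factor. A minimal sketch with invented numbers (the summary reports none):

```python
# Illustrative dilution-linearity check. All values are hypothetical.

def linearity_recovery(neat_value, dilution_factors, measured):
    """Percent recovery of each diluted measurement vs. its expected value."""
    expected = [neat_value / f for f in dilution_factors]
    return [m / e * 100.0 for m, e in zip(measured, expected)]

neat = 200.0                          # IU/mL, hypothetical undiluted result
factors = [2, 4, 8, 16]               # serial dilution factors
measured = [101.5, 49.0, 25.6, 12.2]  # IU/mL, hypothetical diluted results

recoveries = linearity_recovery(neat, factors, measured)
linear = all(90.0 <= r <= 110.0 for r in recoveries)
print([f"{r:.1f}%" for r in recoveries], "linear" if linear else "non-linear")
```

    The 90–110% recovery window is an assumed illustrative criterion, not one stated in the document.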


    K Number
    K031506
    Date Cleared
    2003-06-02

    (19 days)

    Product Code
    Regulation Number
    866.3780
    Reference & Predicate Devices
    Reference Devices :

    K023764

    Intended Use

    The Access Toxo IgM II assay is a paramagnetic-particle, chemiluminescent immunoassay for the qualitative determination of Toxoplasma gondii-specific IgM antibody in adult human serum and plasma, using the Access Immunoassay Systems. The Access Toxo IgM II assay is presumptive for the diagnosis of acute, recent or reactivated Toxoplasma gondii infections in males and pregnant or non-pregnant females. It is recommended this assay be performed in conjunction with a Toxoplasma gondii-specific IgG antibody assay.

    Device Description

    The Access Toxo IgM II reagents consist of reagent packs, controls, QC, substrate and wash buffer.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the Access Toxo IgM II Assay, based on the provided document:

    The document describes a modification to an existing device rather than a new device submission. The primary goal of the study was to demonstrate substantial equivalence of the modified device (Access Toxo IgM II assay on the UniCel DxI 800 Access Immunoassay System) to the legally marketed unmodified device (Access Toxo IgM II assay on the Access 2 System). Therefore, the "acceptance criteria" and "device performance" are focused on comparability between the two systems.


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state numerical acceptance criteria in terms of sensitivity, specificity, or accuracy for the assay itself. Instead, the acceptance criteria are implicitly related to demonstrating substantial equivalence between the two instrument platforms (Access 2 vs. UniCel DxI 800) for the same assay.

    Acceptance Criteria Category | Specific Criteria (Implicit) | Reported Device Performance
    Reproducibility | Assay results on the UniCel DxI 800 System should demonstrate consistent and repeatable measurements | Met established acceptance criteria
    Concordance | Assay results from the UniCel DxI 800 System should agree with those from the Access 2 System | Met established acceptance criteria
    Method Comparison | Strong statistical correlation between results from the UniCel DxI 800 System and the Access 2 System | Showed good correlation between the DxI and Access 2 Systems
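    Reproducibility for an immunoassay is typically reported as a percent coefficient of variation (%CV) across replicate measurements, judged against a pre-set limit. A sketch with invented replicate values and an assumed 10% acceptance limit (neither appears in the submission):

```python
# Illustrative within-run reproducibility check. All values and the 10%
# acceptance limit are hypothetical.

def percent_cv(values):
    """Percent coefficient of variation (sample SD / mean * 100)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return (var ** 0.5) / mean * 100.0

# Twenty hypothetical replicate results (signal/cutoff) for one control level
replicates = [1.02, 0.98, 1.05, 1.01, 0.97, 1.03, 0.99, 1.04,
              1.00, 0.96, 1.02, 1.01, 0.98, 1.05, 1.00, 0.99,
              1.03, 0.97, 1.01, 1.02]

cv = percent_cv(replicates)
print(f"within-run CV = {cv:.2f}%  ->  {'PASS' if cv <= 10.0 else 'FAIL'}")
```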

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify the exact sample size used for the reproducibility and concordance studies. It also does not explicitly state the country of origin of the data or whether the study was retrospective or prospective.

    • Sample Size: Not specified.
    • Data Provenance: Not specified (country of origin, retrospective/prospective).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    This type of study (demonstrating equivalence between two instrument platforms for an immunoassay) typically does not involve human experts establishing a "ground truth" for individual test cases in the same way imaging or diagnostic algorithms might. The "ground truth" for comparability would be the results from the predicate device system (Access 2 System).

    • Number of Experts: Not applicable for this type of comparison study.
    • Qualifications of Experts: Not applicable.

    4. Adjudication Method for the Test Set

    Since the "ground truth" is derived from the predicate device's results, there is no mention of an adjudication method involving multiple human readers.

    • Adjudication Method: Not applicable.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not performed. This study focuses on instrument equivalence for an immunoassay, not on human reader performance with or without AI assistance.

    • Effect Size: Not applicable.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, in essence, this was a standalone performance study of the instrument platform running the assay. The focus was on the performance of the automated immunoassay system (UniCel DxI 800) itself, comparing its results to an established system (Access 2), without direct human interpretation influencing the raw assay results.


    7. Type of Ground Truth Used

    The "ground truth" in this context was the results obtained from the legally marketed predicate device (Access 2 Immunoassay System) using the same Access Toxo IgM II reagents. The study aimed to show that the new system produced equivalent results to the established system.


    8. Sample Size for the Training Set

    This document describes a validation study for a medical device's performance on a new instrument, not the development of a machine learning algorithm. Therefore, there is no "training set" in the context of AI/ML.

    • Sample Size for Training Set: Not applicable.

    9. How the Ground Truth for the Training Set Was Established

    As there is no training set mentioned in the context of an AI/ML algorithm, this question is not applicable.


    K Number
    K031270
    Date Cleared
    2003-05-06

    (15 days)

    Product Code
    Regulation Number
    866.6010
    Reference & Predicate Devices
    Reference Devices :

    K023764

    Intended Use

    The Access CEA assay is a paramagnetic particle, chemiluminescent immunoassay for the quantitative determination of Carcinoembryonic Antigen (CEA) levels in human serum, using the Access Immunoassay Systems. CEA measured by the Access Immunoassay Systems is used as an aid in the management of cancer patients in whom changing CEA concentrations have been observed.

    Device Description

    The Access® CEA reagents consist of reagent packs, calibrators, bi-level controls, substrate and wash buffer.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for a modification to the Access® CEA Reagents on the Access® Immunoassay Systems. The modification involves adding a new instrument platform, the Beckman Coulter UniCel™ DxI 800 Access® Immunoassay System, to the existing family of Access Immunoassay Systems.

    The study aimed to demonstrate that the Access CEA assay on the DxI system is substantially equivalent to the Access CEA assay on the Access 2 system.

    Here's a breakdown of the requested information based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    The document states that the Access CEA assay met established acceptance criteria for "method comparison, precision and analytical sensitivity." However, the specific numerical acceptance criteria for these parameters (e.g., specific ranges for agreement, coefficients of variation, or limits of detection) are not detailed in the provided summary. Similarly, the reported performance values from the studies (e.g., actual method comparison results, precision data, or analytical sensitivity figures) are not provided. The text only offers a general statement of compliance.

    Acceptance Criteria (Specifics Not Provided) | Reported Device Performance (Specifics Not Provided)
    Method Comparison (e.g., % agreement, bias limits) | Met established criteria
    Precision (e.g., %CV, within-run, between-run) | Met established criteria
    Analytical Sensitivity (e.g., Limit of Detection) | Met established criteria
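    Analytical sensitivity for an assay like this is commonly estimated via the limit of blank (LoB) and limit of detection (LoD), in the style of CLSI EP17. A sketch with invented blank and low-level replicates (the submission reports no such data):

```python
# Illustrative LoB/LoD estimate from blank and low-level replicates.
# All values are hypothetical; 1.645 is the usual one-sided 95% z-factor.

def mean_sd(values):
    """Mean and sample standard deviation."""
    n = len(values)
    m = sum(values) / n
    sd = (sum((v - m) ** 2 for v in values) / (n - 1)) ** 0.5
    return m, sd

blanks = [0.02, 0.05, 0.03, 0.04, 0.01, 0.03, 0.02, 0.04, 0.03, 0.05]  # ng/mL
low    = [0.22, 0.25, 0.19, 0.23, 0.26, 0.21, 0.24, 0.20, 0.22, 0.25]  # ng/mL

mb, sb = mean_sd(blanks)
ml, sl = mean_sd(low)
lob = mb + 1.645 * sb   # limit of blank
lod = lob + 1.645 * sl  # limit of detection
print(f"LoB={lob:.3f} ng/mL  LoD={lod:.3f} ng/mL")
```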

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document does not specify the sample sizes used for the method comparison, precision, or analytical sensitivity studies. It also does not mention the data provenance (e.g., country of origin, retrospective or prospective nature of the samples).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This device is an immunoassay for quantitative determination of Carcinoembryonic Antigen (CEA) levels, which relies on a chemical reaction to produce a numerical result. Therefore, there is no ground truth established by human experts in the same way it would be for an imaging device requiring expert interpretation. The "ground truth" for evaluating this device would be established by reference methods or established analytical standards. The document does not provide details on how this was established for the comparison.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Since this is an immunoassay seeking substantial equivalence to a predicate device, the "adjudication method" in the context of human expert review is not applicable. The evaluation relies on quantitative analytical comparisons, not human interpretation consensus.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    No, an MRMC comparative effectiveness study was not performed. This type of study is relevant for medical imaging or diagnostic interpretation tasks where human readers are involved. This device is an automated immunoassay system, and its evaluation focuses on analytical performance metrics rather than human reader improvement.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Yes, the studies described (method comparison, precision, and analytical sensitivity) inherently represent standalone performance of the algorithm/device. The device itself performs the quantitative determination of CEA levels. Human involvement would be in operating the instrument and interpreting the numerical output, but the performance being evaluated is that of the automated system. The document states that the new system (DxI) uses the same reagents and calibrators as the predicate (Access 2), implying that the algorithm/assay itself is unchanged, and the evaluation is on the new instrument platform's ability to produce equivalent results.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The "ground truth" for these types of studies would typically be established by:

    • Predicate device results: For method comparison, the results from the legally marketed Access® CEA Reagents on the Access® Immunoassay Analyzer (K981985, K991707) would serve as the comparator or "reference."
    • Reference materials/standards: For precision and analytical sensitivity, the device would be tested against known concentrations of CEA or characterized control materials.

    The document does not explicitly state the type of ground truth used beyond indicating that method comparison was against the predicate device.

    8. The sample size for the training set

    The document does not mention a training set. This is because the device is an immunoassay kit/system, not an artificial intelligence or machine learning algorithm that requires a distinct training phase. The "training" of such a system would involve its initial development and validation by the manufacturer, but not in the context of a dataset used to optimize an AI model.

    9. How the ground truth for the training set was established

    As there is no "training set" in the context of an AI/ML algorithm for this immunoassay device, this question is not applicable.


    K Number
    K031269
    Date Cleared
    2003-05-02

    (11 days)

    Product Code
    Regulation Number
    866.6010
    Reference & Predicate Devices
    Reference Devices :

    K023764

    Intended Use

    The Access Thyroglobulin assay is a paramagnetic particle, chemiluminescent immunoassay for the quantitative determination of thyroglobulin levels in human serum and plasma, using the Access Immunoassay Systems. This device is intended to aid in the monitoring for the presence of local and metastatic thyroid tissue in patients who have had thyroid gland ablation (using thyroid surgery with or without radioactivity) and who lack serum thyroglobulin antibodies.

    Device Description

    The Access® Thyroglobulin reagents consist of reagent packs, calibrators, substrate and wash buffer.

    AI/ML Overview

    The provided text describes a 510(k) summary for the Access® Thyroglobulin Reagents on the Access® Immunoassay Systems. The submission is for a modification to add a new instrument platform (Beckman Coulter UniCel™ DxI 800 Access® Immunoassay System) to the existing system. The core of the study is to demonstrate substantial equivalence between the new instrument platform and the previously cleared Access 2 system.

    Here's an analysis of the acceptance criteria and study as requested:

    1. Table of acceptance criteria and the reported device performance

    Acceptance Criteria Category | Device Performance Report
    Method Comparison | The Access Thyroglobulin assay met the established acceptance criteria for method comparison when tested on the DxI system compared to the Access 2 system
    Precision | The Access Thyroglobulin assay met the established acceptance criteria for precision when tested on the DxI system
    Analytical Sensitivity | The Access Thyroglobulin assay met the established acceptance criteria for analytical sensitivity when tested on the DxI system

    Note: The specific numerical values or ranges for "established acceptance criteria" are not provided in the summary. The document only states that the criteria were met.

    2. Sample size used for the test set and the data provenance

    The document states that "method comparison, precision and analytical sensitivity studies were conducted." However, it does not provide any details on the sample sizes used for these studies or the data provenance (e.g., country of origin, retrospective/prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This information is not applicable to this type of device and study. The device is an immunoassay for quantitative determination of a biomarker, not an imaging or diagnostic device requiring expert interpretation for ground truth. The ground truth for such assays would typically be defined by reference methods or established laboratory standards.

    4. Adjudication method for the test set

    This information is not applicable as the study design does not involve human readers or interpretations that would require adjudication.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    This information is not applicable. This is a laboratory immunoassay device, not an AI-assisted diagnostic tool that involves human readers.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Yes, a standalone performance study was done. The "Supporting Data" section indicates that "method comparison, precision and analytical sensitivity studies were conducted" for the Access Thyroglobulin assay on the DxI system to demonstrate substantial equivalence to the Access 2 system. This refers to the analytical performance of the instrument and reagents as a standalone system.

    7. The type of ground truth used

    The document does not explicitly state the "type of ground truth" in terms of clinical outcomes or pathology. For an immunoassay seeking substantial equivalence for a new instrument platform, the "ground truth" or reference for comparison would typically be the performance of the legally marketed predicate device (Access® Thyroglobulin Reagents on the Access® Immunoassay Systems on the Access 2 system). The studies would aim to show that the new platform produces equivalent analytical results (method comparison, precision, analytical sensitivity) to the predicate.

    8. The sample size for the training set

    This information is not applicable. The device is an immunoassay system, not an AI/machine learning algorithm that requires a "training set" in the conventional sense. The "training" for such a system involves calibration using specific calibrator materials mentioned in the device description.

    9. How the ground truth for the training set was established

    This information is not applicable for the reasons stated in point 8. The "ground truth" for calibration would be established by the manufacturer based on certified reference materials or established laboratory standards for the calibrators.


    K Number
    K031297
    Date Cleared
    2003-05-02

    (8 days)

    Product Code
    Regulation Number
    866.6010
    Reference & Predicate Devices
    Reference Devices :

    K023764

    Intended Use

    The Access OV Monitor assay is a paramagnetic particle, chemiluminescent immunoassay for the quantitative determination of CA 125 antigen levels in human serum and plasma, using the Access Immunoassay Systems. This device is indicated for use in the measurement of CA 125 antigen to aid in the management of ovarian cancer patients. Serial testing for patient CA 125 antigen concentrations should be used in conjunction with other clinical methods used for monitoring ovarian cancer.

    Device Description

    The Access® OV Monitor reagents, calibrators and the Access Immunoassay Systems family comprise the Access Immunoassay System for the quantitative determination of CA 125 antigen in human serum and plasma.

    AI/ML Overview

    The provided text describes the 510(k) submission for a modification to the Access OV Monitor Assay. The modification is to add a new instrument platform, the Beckman Coulter UniCel™ DxI 800 Access® Immunoassay System. The study conducted aimed to demonstrate that the Access OV Monitor assay on the DxI system is substantially equivalent to the Access OV Monitor assay on the Access 2 system.

    Here's an analysis of the acceptance criteria and the study based on the provided information:

    1. A table of acceptance criteria and the reported device performance:

    The document states: "The Access OV Monitor assay met the established acceptance criteria for method comparison, precision and analytical sensitivity." However, the specific quantitative acceptance criteria and the detailed reported performance values are not provided in this summary.

    2. Sample size used for the test set and the data provenance:

    • Sample Size: The document does not explicitly state the sample size used for the test set in the method comparison, precision, or analytical sensitivity studies.
    • Data Provenance: The document does not specify the country of origin of the data or whether the study was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    This information is not applicable as the device is an in-vitro diagnostic (IVD) immunoassay, not an AI or image-based diagnostic requiring expert interpretation for ground truth. The ground truth would typically be established through analytical methods and reference materials.

    4. Adjudication method for the test set:

    This information is not applicable for an IVD immunoassay. Adjudication methods like 2+1 or 3+1 are typically used for subjective assessments by experts, which is not the case here.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:

    This information is not applicable as the device is an IVD immunoassay and does not involve human readers interpreting images with or without AI assistance. The study focuses on the analytical performance of the immunoassay on a new instrument.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    The study conducted was a standalone performance evaluation of the Access OV Monitor assay on the DxI system, demonstrating its analytical performance without human intervention in the assay's execution, beyond standard lab procedures. The device itself (the immunoassay system) operates without human-in-the-loop performance for result generation.

    7. The type of ground truth used:

    For method comparison, precision, and analytical sensitivity studies of an immunoassay, the "ground truth" is typically established by:

    • Reference Standards/Materials: For analytical sensitivity, known concentrations of the analyte (CA 125 antigen) would be used.
    • Predicate Device/Method: For method comparison, results from the legally marketed predicate device (Access OV Monitor Assay on the Access 2 system) would serve as the comparative ground truth.
    • Certified Reference Materials: For accuracy or calibration, materials with assigned values are used.

    The document does not explicitly state the exact type of ground truth but implies comparison to the predicate device and established analytical methods.
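    Where accuracy against assigned reference-material values is assessed, it is often summarized as percent recovery. A sketch with invented CA 125 values and an assumed 90–110% acceptance window (neither appears in the document):

```python
# Illustrative percent-recovery check against assigned reference values.
# All numbers and the acceptance window are hypothetical.

def percent_recovery(measured, assigned):
    """Measured result as a percentage of the assigned value."""
    return [m / a * 100.0 for m, a in zip(measured, assigned)]

assigned = [35.0, 65.0, 120.0, 250.0]   # U/mL, assigned CA 125 values
measured = [33.9, 66.2, 118.5, 255.1]   # U/mL, hypothetical assay results

recoveries = percent_recovery(measured, assigned)
ok = all(90.0 <= r <= 110.0 for r in recoveries)
print([f"{r:.1f}%" for r in recoveries], "PASS" if ok else "FAIL")
```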

    8. The sample size for the training set:

    This information is not applicable. The device is an immunoassay system, not an AI/machine learning model that requires a training set in the conventional sense. The "training" of the device involves its design, calibration, and validation as an analytical system.

    9. How the ground truth for the training set was established:

    This information is not applicable for the reasons stated above.

