Search Results

Found 26 results

510(k) Data Aggregation

    K Number
    K240279
    Manufacturer
    Date Cleared
    2024-05-01

    (90 days)

    Product Code
    Regulation Number
    866.5830
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Device Name :

    VIDAS TBI (GFAP, UCH-L1)

    Intended Use

    The VIDAS® TBI (GFAP, UCH-L1) test is composed of two automated assays - VIDAS® TBI (GFAP) and VIDAS® TBI (UCH-L1) - to be used on the VIDAS® 3 instrument for the quantitative measurement of Glial Fibrillary Acidic Protein (GFAP) and Ubiquitin C-terminal Hydrolase (UCH-L1) in human serum using the ELFA (Enzyme Linked Fluorescent Assay) technique. The results of both assays are required to obtain an overall qualitative test interpretation.

    The overall qualitative VIDAS® TBI (GFAP, UCH-L1) test result is used, in conjunction with clinical information, to aid in the evaluation of patients (18 years of age or older), presenting within 12 hours of suspected mild traumatic brain injury (Glasgow Coma Scale score 13-15), to assist in determining the need for a Computed Tomography (CT) scan of the head. A negative interpretation of the VIDAS® TBI (GFAP, UCH-L1) test is associated with the absence of acute intracranial lesions visualized on a head CT scan.

    Device Description

    The VIDAS® TBI (GFAP, UCH-L1) test is composed of two automated assays – VIDAS® TBI (GFAP) and VIDAS® TBI (UCH-L1) – to be used on the VIDAS® 3 instrument. Similar to other VIDAS assays, VIDAS TBI (GFAP) and VIDAS TBI (UCH-L1) test kits (specific to each biomarker) contain the solid phase receptacles (SPRs®), the reagent strips, Product Calibrator S1 and Product Control C1. These test kits will also contain the master lot entry (MLE) data, i.e., a barcode printed on the outer label of the packaging, as well as the reference number of the package insert to download from the bioMérieux website.

    Whether it be for the GFAP or UCH-L1 quantification, the test combines a three-step enzyme immunoassay sandwich method with a final fluorescent detection step, also known as enzyme-linked fluorescent assay (ELFA).

    The Solid Phase Receptacle (SPR) serves as the solid phase as well as the pipetting device. The inner surface of the SPR is coated with antibodies against the substance of interest, i.e., anti-GFAP or anti-UCH-L1 antibodies. The reagent strip consists of 10 wells covered with a labeled foil seal. Well 1 is designated for the sample. Eight of the wells contain sample diluent, wash buffer, conjugate, and tracer. The last well contains the fluorescent substrate. All of the assay steps are performed automatically by the instrument.

    The intensity of the fluorescence is proportional to the concentration of the analyte in the sample. At the end of the assay, the biomarker concentration is automatically calculated by the instrument in relation to the calibration curve stored in the Master Lot Entry (MLE) data.
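
    The summary does not specify the calibration model, so the sketch below is illustrative only: it assumes a four-parameter logistic (4PL) curve, a common choice for immunoassay calibration, and uses hypothetical parameter values to show how a concentration can be back-calculated from a fluorescence reading.

```python
# Illustrative sketch only: a 4PL calibration model is assumed; the actual VIDAS
# calibration details are not given in the summary, and these values are hypothetical.
def rfv_from_concentration(c, a, b, c50, d):
    """4PL model: relative fluorescence value (RFV) as a function of concentration."""
    return d + (a - d) / (1.0 + (c / c50) ** b)

def concentration_from_rfv(rfv, a, b, c50, d):
    """Invert the 4PL curve to back-calculate the analyte concentration from an RFV."""
    return c50 * (((a - d) / (rfv - d)) - 1.0) ** (1.0 / b)

params = dict(a=50.0, b=1.2, c50=150.0, d=9000.0)   # hypothetical master-curve parameters
conc = concentration_from_rfv(2500.0, **params)      # back-calculated concentration, pg/mL
print(round(conc, 1), round(rfv_from_concentration(conc, **params), 1))  # round-trip check
```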

    VIDAS TBI (GFAP) and VIDAS TBI (UCH-L1) results are reported separately: the VIDAS 3 reports the calculated concentration and the qualitative interpretation for each. The final result, i.e., the patient's status in relation to suspected mild traumatic brain injury, must be interpreted by the user according to the decision tree presented in the package insert.

    AI/ML Overview

    This document describes the validation of the VIDAS® TBI (GFAP, UCH-L1) test, an automated assay intended to aid in the evaluation of suspected mild traumatic brain injury. The submission compares the device to a predicate device, the BANYAN BTI™, and summarizes non-clinical and clinical testing results. The following points address the requested information based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state "acceptance criteria" for each performance metric in a table format. However, it presents the results of various assays and often implies that the results "demonstrate" or "confirm" the required performance, indicating these are the achieved results compared to an internal standard or regulatory expectation. Below is a table summarizing various performance metrics and their reported results. Specific acceptance criteria values are not provided in this public summary.

    Performance Metric / Reported Device Performance

    Analytical Measuring Interval
      VIDAS TBI (GFAP): 10.0 - 320.0 pg/mL
      VIDAS TBI (UCH-L1): 80.0 - 2560.0 pg/mL
    Linearity
      VIDAS TBI (GFAP): demonstrated over the range 6.7 - 354.5 pg/mL
      VIDAS TBI (UCH-L1): demonstrated over the range 58.9 - 2769.1 pg/mL
    Detection Limits
      Limit of Blank (LoB), GFAP: 4.4 pg/mL
      Limit of Detection (LoD), GFAP: 5.4 pg/mL
      Limit of Quantitation (LoQ), GFAP: 5.4 pg/mL
      Limit of Blank (LoB), UCH-L1: 41.8 pg/mL
      Limit of Detection (LoD), UCH-L1: 48.1 pg/mL
      Limit of Quantitation (LoQ), UCH-L1: 48.1 pg/mL
    Hook Effect
      VIDAS TBI (GFAP): no hook effect up to 200,000.0 pg/mL
      VIDAS TBI (UCH-L1): no hook effect up to 400,000.0 pg/mL
    Calibration Frequency: verified for 56 days
    Sample Stability: verified for specified storage conditions and freeze/thaw cycles
    Diagnostic Accuracy
      Diagnostic Sensitivity: 96.7%
      Diagnostic Specificity: 41.2%
      Positive Likelihood Ratio: 1.6
      Negative Likelihood Ratio: 0.1
      Positive Predictive Value: 9.9%
      Negative Predictive Value: 99.5%
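
    As a cross-check on the table above, the likelihood ratios follow directly from the reported sensitivity and specificity (LR+ = Se/(1 - Sp), LR- = (1 - Se)/Sp); the short sketch below reproduces the reported values after rounding. The predictive values additionally depend on the prevalence of CT-positive findings in the ALERT cohort, which is not stated in the summary.

```python
# Recompute the likelihood ratios from the reported VIDAS TBI sensitivity/specificity.
sensitivity = 0.967   # reported diagnostic sensitivity, 96.7%
specificity = 0.412   # reported diagnostic specificity, 41.2%

lr_positive = sensitivity / (1.0 - specificity)   # ~1.64, reported as 1.6
lr_negative = (1.0 - sensitivity) / specificity   # ~0.08, reported as 0.1
print(f"LR+ = {lr_positive:.2f}, LR- = {lr_negative:.2f}")
```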

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Test set sample size: For the diagnostic accuracy study, the sample size is not explicitly stated but refers to the "ALERT cohort." For the reference interval study, 513 apparently healthy US adult subjects were used.
    • Data provenance: The diagnostic accuracy study was performed using the "ALERT cohort." The reference interval study was conducted at three sites (one internal European site and two external US sites). It is not specified whether these studies were retrospective or prospective, though "ALERT cohort" could suggest a pre-existing dataset.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    This information is not provided in the document. The diagnostic accuracy study compares the device's results to the presence/absence of acute intracranial lesions visualized on a head CT scan, but the number or qualifications of experts interpreting these CT scans to establish ground truth are not mentioned.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    This information is not provided in the document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    A multi-reader multi-case (MRMC) comparative effectiveness study was not performed. This device is an in vitro diagnostic test for quantitative measurement of biomarkers, not an AI-assisted imaging device that impacts human reader performance.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, the diagnostic accuracy study presents the standalone performance of the VIDAS® TBI (GFAP, UCH-L1) assay. The results (sensitivity, specificity, etc.) are based on the device's output compared to the ground truth (CT scan findings). The device is used "in conjunction with clinical information," but the reported diagnostic accuracy figures are for the test itself.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The ground truth for the diagnostic accuracy study was the presence or absence of "acute intracranial lesions visualized on a head CT scan." This indicates that CT scan results were used as the reference standard for traumatic brain injury assessment.

    8. The sample size for the training set

    This document describes a diagnostic device and its validation. It does not explicitly mention a "training set" in the context of machine learning or AI models with distinct training and test phases. The "test set" for diagnostic accuracy is referred to as the "ALERT cohort." The reference interval was established using 513 apparently healthy subjects.

    9. How the ground truth for the training set was established

    As there is no explicitly defined "training set" for an AI model in this submission, the method for establishing ground truth for a training set is not applicable or described. The clinical performance data presented (Diagnostic Accuracy and Reference interval) seems to represent the evaluation of the final device.


    K Number
    K234143
    Date Cleared
    2024-03-27

    (89 days)

    Product Code
    Regulation Number
    866.5830
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    i-STAT TBI Cartridge with the i-STAT Alinity System

    Intended Use

    The i-STAT TBI test is a panel of in vitro diagnostic immunoassays for the quantitative measurements of glial fibrillary acidic protein (GFAP) and ubiquitin carboxyl-terminal hydrolase L1 (UCH-L1) in whole blood and a semi-quantitative interpretation of test results derived from these measurements, using the i-STAT Alinity instrument. The interpretation of test results is used, in conjunction with other clinical information, to aid in the evaluation of patients, 18 years of age or older, presenting with suspected mild traumatic brain injury (Glasgow Coma Scale score 13-15), which may include one of the following four clinical criteria: 1) any period of loss of consciousness, 2) any loss of memory for events immediately before and after the accident, 3) any alteration in mental state at the time of accident, and/or 4) focal neurological deficits, within 24 hours of injury, to assist in determining the need for a CT (computed tomography) scan of the head. A 'Not Elevated' test interpretation is associated with the absence of acute traumatic intracranial lesions visualized on a head CT scan.

    The test is to be used with venous whole blood collected with EDTA anticoagulant in point of care or clinical laboratory settings by a healthcare professional.

    Device Description

    The i-STAT TBI cartridge is a multiplex immunoassay that contains assays for both ubiquitin carboxyl-terminal hydrolase L1 (UCH-L1) and glial fibrillary acidic protein (GFAP). The assays test for the presence of these biomarkers in a whole blood sample and yield a semi-quantitative test interpretation based on measurements of both UCH-L1 and GFAP in approximately 15 minutes. The i-STAT TBI cartridge is designed to be run only on the i-STAT Alinity instrument.

    The i-STAT Alinity instrument is a handheld, in vitro diagnostic device. The instrument is the main user interface of the i-STAT Alinity System and functions as the electro-mechanical interface to the test cartridge. The instrument executes the test cycle, acquires and processes the electrical sensor signals converting the signals into quantitative results. These functions are controlled by a microprocessor.

    The i-STAT Alinity System is comprised of the i-STAT Alinity instrument, the i-STAT test cartridges and accessories (i-STAT Alinity Base Station, Electronic Simulator and Printer).

    Assayed quality control materials are also available for use with the i-STAT TBI cartridge and include i-STAT TBI Control Level 1, i-STAT TBI Control Level 2, and the i-STAT TBI Calibration Verification Levels 1-3.

    The i-STAT TBI Controls are available to monitor the performance of glial fibrillary acidic protein (GFAP) and ubiquitin carboxyl-terminal hydrolase L1 (UCH-L1) assays on the i-STAT Alinity instrument.

    The i-STAT TBI Calibration Verification Materials are available to verify the calibration of glial fibrillary acidic protein (GFAP) and ubiquitin carboxyl-terminal hydrolase L1 (UCH-L1) assays throughout the reportable range on the i-STAT Alinity instrument.

    AI/ML Overview

    The provided text describes the analytical and clinical performance of the i-STAT TBI cartridge with the i-STAT Alinity System, which measures GFAP and UCH-L1 to aid in the evaluation of patients with suspected mild traumatic brain injury (TBI). The information is presented to support a 510(k) premarket notification for substantial equivalence to a predicate device.

    Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided document:

    1. A Table of Acceptance Criteria (Implied) and Reported Device Performance

    The document does not explicitly present a "table of acceptance criteria" with predefined thresholds. Instead, it describes performance characteristics that are presumably deemed acceptable for demonstrating substantial equivalence. The core clinical performance criterion for this device, a TBI assessment test, is its ability to correctly identify patients not needing a head CT scan, which translates to high sensitivity and negative predictive value (NPV) for the absence of acute intracranial lesions.

    Here's a summary of the reported core performance:

    Performance Metric / Reported Device Performance (i-STAT TBI cartridge with i-STAT Alinity System)
      Clinical Sensitivity (for acute traumatic intracranial lesions): 96.5% (273/283) [95% CI: 93.6%, 98.1%]
      Clinical Specificity (for absence of acute traumatic intracranial lesions): 40.3% (277/687) [95% CI: 36.7%, 44.0%]
      Negative Predictive Value (NPV): 96.5% (277/287) [95% CI: 93.7%, 98.1%]
      Adjusted NPV at 6% prevalence: 99.4% [95% CI: 99.0%, 99.7%]
      Positive Predictive Value (PPV): 40.0% (273/683) [95% CI: 38.4%, 41.5%]
      False Negative Rate: 3.5% (10/283)
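
    The adjusted NPV row can be reproduced from the reported sensitivity and specificity with Bayes' rule at the stated 6% prevalence of CT-positive findings; the sketch below uses only the figures reported above and is a cross-check, not part of the submission.

```python
# Prevalence-adjusted NPV from sensitivity and specificity (Bayes' rule).
def adjusted_npv(sensitivity, specificity, prevalence):
    true_negative_mass = specificity * (1.0 - prevalence)
    false_negative_mass = (1.0 - sensitivity) * prevalence
    return true_negative_mass / (true_negative_mass + false_negative_mass)

# Reported i-STAT TBI values: sensitivity 96.5%, specificity 40.3%, prevalence 6%.
print(f"{adjusted_npv(0.965, 0.403, 0.06):.1%}")   # 99.4%, matching the adjusted NPV above
```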

    Key Implied Acceptance Criteria based on Regulatory Context:

    • High Clinical Sensitivity: The device must reliably identify patients with acute intracranial lesions, minimizing false negatives to ensure patient safety and avoid missing critical injuries. A 96.5% sensitivity is presented as acceptable.
    • High Negative Predictive Value (NPV): Crucially, the device's main utility is to aid in determining the need for a CT scan. A high NPV means that a "Not Elevated" result reliably indicates the absence of acute traumatic intracranial lesions. The 96.5% NPV (and higher adjusted NPV) supports this.
    • Acceptable False Negative Rate: The reported 3.5% false negative rate, with the additional detail that "None of these ten (10) subjects with false negative results required surgical intervention related to their head injury as no neurosurgical lesions were identified by CT scan in these subjects," addresses a critical safety aspect.
    • Analytical Performance: The document provides extensive data on analytical precision (semi-quantitative and qualitative, 20-day and multi-site), linearity, hook effect, traceability, reference interval, detection limit, analytical specificity (interference, cross-reactivity, cross-talk), and hematocrit sensitivity. These are all standard analytical performance characteristics that would need to meet predefined criteria (often internal to the manufacturer or based on regulatory guidance) to ensure the assay's reliability and robustness. Specific numerical acceptance criteria for each (e.g., thresholds such as "CV must be ...") are not stated in this summary.

    K Number
    K232669
    Device Name
    TBI
    Date Cleared
    2023-09-29

    (28 days)

    Product Code
    Regulation Number
    866.5830
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    TBI

    Intended Use

    The TBI test is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays (CMIA) used for the quantitative measurements of glial fibrillary acidic protein (GFAP) and ubiquitin carboxyl-terminal hydrolase L1 (UCH-L1) in human plasma and serum and provides a semi-quantitative interpretation of test results derived from these measurements using the ARCHITECT i1000SR System.

    The interpretation of test results is used, in conjunction with other clinical information, to aid in the evaluation of patients, 18 years of age or older, presenting with suspected mild traumatic brain injury (Glasgow Coma Scale score 13-15) within 12 hours of injury, to assist in determining the need for a CT (computed tomography) scan of the head. A negative test result is associated with the absence of acute intracranial lesions visualized on a head CT scan.

    The TBI test is intended for use in clinical laboratory settings by healthcare professionals.

    Device Description

    The TBI test is a panel of in vitro diagnostic quantitative measurements of GFAP and UCH-L1 and provides a semi-quantitative interpretation of GFAP and UCH-L1 in human plasma and serum.

    GFAP: This assay is an automated, two-step immunoassay for the quantitative measurement of GFAP in human plasma and serum using chemiluminescent microparticle immunoassay (CMIA) technology.

    UCH-L1: This assay is an automated, two-step immunoassay for the quantitative measurement of UCH-L1 in human plasma and serum using CMIA technology.

    Interpretation of Results: The assay cutoffs were established to be 35.0 pg/mL (35.0 ng/L) for GFAP and 400.0 pg/mL (400.0 ng/L) for UCH-L1. The GFAP and UCH-L1 results are reported separately and the software provides a TBI interpretation relative to the respective cutoff values.
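
    The summary gives the cutoffs but not the exact rule for combining the two markers into an overall interpretation. The sketch below assumes the common convention that the interpretation is negative only when both markers fall below their cutoffs; that combination rule, the function name, and the example inputs are illustrative assumptions, not the manufacturer's documented algorithm.

```python
# Illustrative only: the combination rule (negative only when both markers are
# below their cutoffs) is an assumption, not taken from the 510(k) summary.
GFAP_CUTOFF_PG_ML = 35.0      # reported GFAP cutoff
UCHL1_CUTOFF_PG_ML = 400.0    # reported UCH-L1 cutoff

def tbi_interpretation(gfap_pg_ml: float, uchl1_pg_ml: float) -> str:
    """Return a semi-quantitative TBI interpretation relative to the assay cutoffs."""
    if gfap_pg_ml < GFAP_CUTOFF_PG_ML and uchl1_pg_ml < UCHL1_CUTOFF_PG_ML:
        return "Negative"
    return "Positive"

print(tbi_interpretation(20.1, 150.0))   # Negative: both markers below their cutoffs
print(tbi_interpretation(80.4, 150.0))   # Positive: GFAP above its cutoff
```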

    AI/ML Overview

    The provided text describes the TBI (Traumatic Brain Injury) test, an in vitro diagnostic device, and its performance evaluation for the ARCHITECT i1000SR system. The submission is a 510(k) for substantial equivalence to a predicate device (TBI on the Alinity i system).

    Here's an analysis of the acceptance criteria and study as per your request, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a "table of acceptance criteria" in the format of specific thresholds for the performance metrics. Instead, it states that the device "met the pre-defined product requirements for all characteristics evaluated in the verification studies." The performance metrics reported are for precision (20-Day and Reproducibility), Limits of Blank (LoB), Detection (LoD), Quantitation (LoQ), and Linearity, along with a comparison summary using Passing-Bablok regression against the predicate device.

    Table of Reported Device Performance (Implied Acceptance through Meeting Requirements):

    Performance Metric / Reported Device Performance (TBI on ARCHITECT i1000SR)
      GFAP 20-Day Precision: 2.2 to 6.2 %CV for samples with GFAP concentrations from 20.4 to 37,098.8 pg/mL
      UCH-L1 20-Day Precision: 2.2 to 4.5 %CV for samples with UCH-L1 concentrations from 187.6 to 19,645.0 pg/mL
      GFAP Reproducibility: 2.7 to 6.0 %CV for samples with GFAP concentrations from 23.6 to 34,087.5 pg/mL; 1.30 pg/mL SD for a sample with GFAP concentration 19.1 pg/mL
      UCH-L1 Reproducibility: 2.4 to 3.9 %CV for samples with UCH-L1 concentrations from 193.0 to 20,363.2 pg/mL
      GFAP LoB: 2.0 pg/mL
      GFAP LoD: 3.2 pg/mL
      GFAP LoQ: 6.1 pg/mL
      UCH-L1 LoB: 9.2 pg/mL
      UCH-L1 LoD: 18.3 pg/mL
      UCH-L1 LoQ: 26.3 pg/mL
      GFAP Linearity: 6.1 to 42,000.0 pg/mL
      UCH-L1 Linearity: 26.3 to 25,000.0 pg/mL
      Sample Onboard Stability: 2 hours
      Reagent Onboard/Calibration Curve Storage Stability: 30 days
      Comparison to Predicate (GFAP): N=123, R=1.00 (95% CI: 1.00, 1.00), Intercept: -0.6 (95% CI: -1.1, -0.3), Slope: 1.03 (95% CI: 1.02, 1.05)
      Comparison to Predicate (UCH-L1): N=123, R=1.00 (95% CI: 1.00, 1.00), Intercept: -6.0 (95% CI: -7.9, -4.0), Slope: 1.06 (95% CI: 1.05, 1.07)
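
    The predicate comparison above reports Passing-Bablok slope and intercept estimates. As background, here is a minimal sketch of the core Passing-Bablok calculation (the shifted median of all pairwise slopes); it omits tie handling and the confidence intervals reported in the study, and the function name and toy data are illustrative only.

```python
import numpy as np

def passing_bablok(x, y):
    """Minimal Passing-Bablok sketch: slope is the shifted median of pairwise
    slopes, intercept is the median offset. Ties and CIs are not handled."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = []
    n = len(x)
    for i in range(n - 1):
        for j in range(i + 1, n):
            dx, dy = x[j] - x[i], y[j] - y[i]
            if dx == 0:
                continue                     # undefined slope, skip the pair
            s = dy / dx
            if s != -1.0:                    # slopes of exactly -1 are excluded by convention
                slopes.append(s)
    slopes = np.sort(slopes)
    m = len(slopes)
    k = int(np.sum(slopes < -1.0))           # offset that keeps the estimator unbiased
    if m % 2:
        slope = slopes[(m + 1) // 2 + k - 1]
    else:
        slope = 0.5 * (slopes[m // 2 + k - 1] + slopes[m // 2 + k])
    intercept = np.median(y - slope * x)
    return slope, intercept

# Toy example: a candidate method reading ~3% high against a predicate method.
rng = np.random.default_rng(0)
predicate = rng.uniform(10, 40000, 123)
candidate = 1.03 * predicate - 0.6 + rng.normal(0, 2, 123)
print(passing_bablok(predicate, candidate))  # slope close to 1.03; intercept small on this scale
```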

    Study Details:

    1. Sample sizes used for the test set and the data provenance:

      • Test Set (Method Comparison): N=123 for both GFAP and UCH-L1 assays in the comparison study against the predicate device.
      • Data Provenance: The document does not specify the country of origin of the data or whether the data was retrospective or prospective. It refers to "verification studies" and "studies were performed based on guidance from CLSI EP09c, 3rd ed." These are typically laboratory-based analytical performance studies. The clinical utility of the test (used to aid in evaluation of patients with suspected mild TBI to determine need for CT scan) suggests that patient samples were likely used for the comparison study, but details about their collection (retrospective/prospective, patient demographics, clinical context) are not provided in this summary.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • This information is not applicable and not provided in the document. The TBI test is an in vitro diagnostic (IVD) quantitative measurement of biomarkers (GFAP and UCH-L1). The "ground truth" for its performance is established by comparison to a legally marketed predicate device (K223602, TBI for Alinity i) and internal analytical performance studies using known concentrations or reference methods. The "interpretation of test results" for the TBI test (positive/negative) is based on established cutoff values for GFAP and UCH-L1, which are compared to CT scan results (absence of acute intracranial lesions). There is no mention of human experts directly establishing "ground truth" for the device's output itself in this context.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • Not applicable. This is an IVD device measuring biomarkers. Adjudication methods like 2+1 or 3+1 are typically used in image-based diagnostic studies where human readers interpret images, and consensus is sometimes needed to establish ground truth or resolve discrepancies.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

      • No. This is an in vitro diagnostic (IVD) test, not an AI-assisted imaging device that impacts human reader performance. Therefore, an MRMC study and effect size on human readers are not applicable.
    5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

      • Yes, in a sense. The TBI test is a standalone device (a panel of immunoassays interpreted by defined cutoffs). Its performance is evaluated analytically (precision, linearity, LoD/LoQ) and by direct comparison of its measurements to those of a predicate device, which is also a standalone IVD. The interpretation of the test results (positive/negative) is an automated process based on the measured biomarker levels and predefined cutoffs. While the "interpretation of test results is used, in conjunction with other clinical information, to aid in the evaluation of patients... to assist in determining the need for a CT (computed tomography) scan," the device itself provides the result as an algorithm-driven interpretation (based on raw measurement data).
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • For the quantitative measurements (GFAP, UCH-L1), the "ground truth" in the comparative study is the performance of the legally marketed predicate device (TBI on Alinity i system, K223602). Analytical performance metrics (LoB, LoD, LoQ, linearity, precision) are established using reference materials or samples with known or characterized concentrations.
      • For the clinical context of determining the "need for a CT scan of the head," the ground truth stated for a negative test result is its association with "the absence of acute intracranial lesions visualized on a head CT scan." This implies that CT scan findings serve as the clinical ground truth for evaluating the negative predictive value of the test, though this specific performance characteristic is not detailed in the provided summary. For the positive result, the test aids in determining the need for a CT scan, but the summary doesn't explicitly state the ground truth for a positive result (e.g., presence of lesions, clinical outcome).
    7. The sample size for the training set:

      • No information about a "training set" is provided. This is an IVD device based on established immunoassay technology and predefined cutoffs, not a machine learning or AI algorithm that typically requires a distinct training phase with labeled data. The cutoffs (35.0 pg/mL for GFAP and 400.0 pg/mL for UCH-L1) are stated as "established," but the method and data used for their establishment are not described in this summary.
    8. How the ground truth for the training set was established:

      • Not applicable as no "training set" is mentioned in the context of this 510(k) summary.

    K Number
    K223602
    Date Cleared
    2023-03-02

    (90 days)

    Product Code
    Regulation Number
    866.5830
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Device Name :

    Traumatic brain injury (TBI) test

    Intended Use

    The TBI test is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays (CMIA) used for the quantitative measurements of glial fibrillary acidic protein (GFAP) and ubiquitin carboxyl-terminal hydrolase L1 (UCH-L1) in human plasma and serum and provides a semi-quantitative interpretation of test results derived from these measurements using the Alinity i system.

    The interpretation of test results is used, in conjunction with other clinical information, to aid in the evaluation of patients, 18 years of age or older, presenting with suspected mild traumatic brain injury (Glasgow Coma Scale score 13-15) within 12 hours of injury, to assist in determining the need for a CT (computed tomography) scan of the head. A negative test result is associated with the absence of acute intracranial lesions visualized on a head CT scan.

    The TBI test is intended for use in clinical laboratory settings by healthcare professionals.

    Device Description

    The TBI test is a panel of in vitro diagnostic quantitative measurements of GFAP and UCH-L1 and provides a semi-quantitative interpretation of GFAP and UCH-L1 in human plasma and serum.

    The GFAP assay (subject device) is an automated immunoassay for the quantitative measurement of GFAP in plasma and serum using chemiluminescent microparticle immunoassay (CMIA) technology on the Alinity i system.

    The UCH-L1 assay (subject device) is an automated immunoassay for the quantitative measurement of UCH-L1 in plasma and serum using chemiluminescent microparticle immunoassay (CMIA) technology on the Alinity i system.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for the Abbott Laboratories TBI test:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the TBI test are implicitly defined by the clinical performance observed in the pivotal study, particularly regarding its ability to aid in determining the need for a head CT scan in patients with suspected mild TBI. The key performance metrics are Sensitivity and Negative Predictive Value (NPV) as these are critical for a rule-out test for a serious condition like intracranial lesions.

    Acceptance Criteria Category and Metric (Target/Requirement), with Reported Device Performance (Pivotal Study - Archived Samples; Supplemental Study - Fresh Samples)

    Clinical Performance (Rule-Out Test)
      High Sensitivity (to minimize false negatives for acute intracranial lesions): Pivotal 96.7% (95% CI: 91.7%, 98.7%); Supplemental 100.0% (95% CI: 78.5%, 100.0%)
      High Negative Predictive Value (NPV) (to ensure negative results reliably indicate absence of acute intracranial lesions): Pivotal 99.4% (95% CI: 98.6%, 99.8%); Supplemental 100.0% (95% CI: 85.7%, 100.0%)
      Adjusted NPV (for 6% CT scan positive prevalence): Pivotal 99.5% (95% CI: 98.6%, 99.8%); Supplemental 99.2% (95% CI: 89.1%, 99.9%)
    Other Clinical Metrics
      Specificity (percentage of true negatives): Pivotal 40.1% (95% CI: 37.8%, 42.4%); Supplemental 27.7% (95% CI: 19.2%, 38.2%)
      Positive Predictive Value (PPV): Pivotal 9.8% (95% CI: 8.2%, 11.6%); Supplemental 18.9% (95% CI: 11.6%, 29.3%)
      Adjusted PPV (for 6% CT scan positive prevalence): Pivotal 9.3% (95% CI: 8.9%, 9.8%); Supplemental 8.1% (95% CI: 7.2%, 9.1%)
      Likelihood Ratio Negative (LR-): Pivotal 0.08 (95% CI: 0.03, 0.22); Supplemental 0.12 (95% CI: 0.01, 1.91)
      Likelihood Ratio Positive (LR+): Pivotal 1.61 (95% CI: 1.53, 1.70); Supplemental 1.38 (95% CI: 1.21, 1.58)
    Analytical Performance
      Limit of Quantitation (LoQ) for GFAP and UCH-L1 suitable for clinical application: GFAP 6.1 pg/mL; UCH-L1 26.3 pg/mL (analytical performance is consistent across sample types)
      Linearity across the analytical measuring interval: GFAP 6.1 - 42,000.0 pg/mL; UCH-L1 26.3 - 25,000.0 pg/mL
      Overall Within-Laboratory Precision (for GFAP and UCH-L1): GFAP CV ...

    K Number
    K213730
    Date Cleared
    2022-04-21

    (146 days)

    Product Code
    Regulation Number
    870.2780
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Device Name :

    MESI mTABLET TBI diagnostic system, MESI mTABLET TBI

    Intended Use
    Device Description
    AI/ML Overview

    K Number
    K201778
    Date Cleared
    2021-01-08

    (192 days)

    Product Code
    Regulation Number
    866.5830
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Device Name :

    i-STAT TBI Plasma cartridge with the i-STAT Alinity System

    Intended Use

    The i-STAT TBI Plasma test is a panel of in vitro diagnostic immunoassays for the quantitative measurements of glial fibrillary acidic protein (GFAP) and ubiquitin carboxyl-terminal hydrolase L1 (UCH-L1) in plasma and a semiquantitative interpretation of test results derived from these measurements, using the i-STAT Alinity Instrument. The interpretation of test results is used, in conjunction with other clinical information, to aid in the evaluation of patients, 18 years of age or older, presenting with suspected mild traumatic brain injury (Glasgow Coma Scale score 13-15) within 12 hours of injury, to assist in determining the need for a CT (computed tomography) scan of the head. A 'Not Elevated' test interpretation is associated with the absence of acute traumatic intracranial lesions visualized on a head CT scan.

    The test is to be used with plasma prepared from EDTA anticoagulated specimens in clinical laboratory settings by a healthcare professional. The i-STAT TBI Plasma test is not intended to be used in point of care settings.

    Device Description

    The i-STAT TBI Plasma cartridge is a multiplex immunoassay that contains assays for both ubiquitin carboxyl-terminal hydrolase L1 (UCH-L1) and glial fibrillary acidic protein (GFAP). The assays test for the presence of these biomarkers in a plasma sample and yield a semi-quantitative test interpretation based on measurements of both UCH-L1 and GFAP in approximately 15 minutes. The i-STAT TBI Plasma cartridge is designed to be run only on the i-STAT Alinity instrument.

    The i-STAT Alinity instrument is a handheld, in vitro diagnostic device designed to run only i-STAT test cartridges. The instrument is the main user interface of the i-STAT System and functions as the electro-mechanical interface to the test cartridge. The instrument executes the test cycle, acquires and processes the electrical sensor signals converting the signals into quantitative results. These functions are controlled by a microprocessor.

    The i-STAT Alinity System is comprised of the i-STAT Alinity instrument, the i-STAT test cartridges and accessories (i-STAT Alinity Base Station, Electronic Simulator and Printer).

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the i-STAT TBI Plasma cartridge with the i-STAT Alinity System, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state pre-defined acceptance criteria in a table format. However, it presents clinical performance parameters for sensitivity, specificity, and negative predictive value (NPV). The implied "acceptance criteria" are derived from comparison to the predicate device and the clinical utility for reducing unnecessary CT scans.

    Performance Parameter / Acceptance Criteria (Implied) / Reported Device Performance (Pivotal Study, N=1901; Supplemental Fresh Specimen Study, N=88)
      Clinical Sensitivity: comparable to the predicate device and high enough to identify true positive cases of intracranial lesions. Pivotal 95.8% (95% CI: 90.6%, 98.2%); Supplemental 100.0% (95% CI: 88.3%, 100.0%)
      Clinical Specificity: comparable to the predicate device and sufficient to potentially reduce unnecessary CT scans. Pivotal 40.4% (95% CI: 38.2%, 42.7%); Supplemental 23.7% (95% CI: 14.7%, 36.0%)
      Negative Predictive Value (NPV): high enough to confidently indicate the absence of acute traumatic intracranial lesions when the test is 'Not Elevated'. Pivotal 99.3% (95% CI: 98.5%, 99.7%); Supplemental 100.0% (95% CI: 80.2%, 100.0%), with an adjusted NPV at 6% prevalence of 100.0% (95% CI: 96.9%, 100.0%)
      False Negative Rate: low, especially for lesions requiring surgical intervention. Pivotal 4.2% (5/120), with no false negatives among cases requiring surgical intervention; Supplemental 0% (0/29)
      False Positive Rate: tolerable given the clinical benefit of potentially reducing unnecessary CT scans. Pivotal 59.6% (1061/1781); Supplemental 76.2% (45/59)

    Note: The document explicitly states that the device was deemed "substantially equivalent" to the predicate, and a "benefit-risk assessment was performed," suggesting that the performance metrics achieved were considered acceptable for its intended use and comparative to the existing predicate.
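
    The pivotal-study proportions can be cross-checked by reconstructing the underlying 2x2 table from the counts reported above (120 CT-positive and 1781 CT-negative subjects, 5 false negatives, 1061 false positives); the short sketch below reproduces the reported sensitivity, specificity, NPV, and false negative rate.

```python
# Reconstruct the pivotal-study 2x2 table from the counts reported above.
ct_positive, ct_negative = 120, 1781            # N = 1901 subjects
false_negative, false_positive = 5, 1061

true_positive = ct_positive - false_negative    # 115
true_negative = ct_negative - false_positive    # 720

sensitivity = true_positive / ct_positive                   # 95.8%
specificity = true_negative / ct_negative                   # 40.4%
npv = true_negative / (true_negative + false_negative)      # 99.3%
fn_rate = false_negative / ct_positive                      # 4.2%
print(f"Se={sensitivity:.1%}  Sp={specificity:.1%}  NPV={npv:.1%}  FN rate={fn_rate:.1%}")
```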


    2. Sample Size Used for the Test Set and Data Provenance

    • Pivotal Study:
      • Sample Size: 1901 subjects (120 with positive CT, 1781 with negative CT).
      • Data Provenance: Prospectively collected and archived (frozen) plasma specimens. Subjects enrolled at 22 clinical sites in three countries: United States, Germany, and Hungary.
    • Supplemental Fresh Specimen Study:
      • Sample Size: 88 subjects (29 with positive CT, 59 with negative CT).
      • Data Provenance: Freshly collected plasma specimens. Subjects enrolled across 4 clinical sites of the TRACK-TBI study in the United States.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: At least two neuroradiologists, with a third neuroradiologist for adjudication if necessary.
    • Qualifications of Experts: Neuroradiologists (specific years of experience or subspecialty certification not detailed, but implied by the term "neuroradiologist").

    4. Adjudication Method for the Test Set

    The adjudication method used was consensus interpretation between two neuroradiologists, with adjudication by a third neuroradiologist if necessary. This is commonly referred to as a "2+1" or "multiple reader, with adjudication" method.


    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, an MRMC comparative effectiveness study comparing human readers with AI assistance versus without AI assistance was not described in this document. The clinical studies focused on the standalone diagnostic performance of the device itself (i-STAT TBI Plasma test interpretation) against a CT scan ground truth, not on evaluating human reader performance with or without the device. The device's output is an "interpretation of test results...to aid in the evaluation of patients...to assist in determining the need for a CT scan," suggesting it's designed to be used by a healthcare professional as an aid, but the study design doesn't directly measure the improvement of human readers through its use.


    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, the clinical studies presented here (Pivotal and Supplemental) are effectively standalone performance studies for the i-STAT TBI Plasma test. The results (sensitivity, specificity, NPV) are reported for the device's interpretation ("Elevated" or "Not Elevated") directly against the CT scan ground truth, without measuring the impact of a human healthcare professional's subsequent decision-making. The device provides "a semiquantitative interpretation of test results derived from these measurements," which is then used "in conjunction with other clinical information, to aid in the evaluation of patients...to assist in determining the need for a CT scan." So, while it's an aid to a human, the performance metrics reported are for the device's output itself.


    7. The Type of Ground Truth Used

    The primary ground truth used for the clinical studies was the presence or absence of acute traumatic intracranial lesions visualized on a head CT (Computed Tomography) scan. This ground truth was established by consensus interpretation of neuroradiologists.


    8. The Sample Size for the Training Set

    • Assay Cutoff Determination: A training set of 420 subjects (274 males and 146 females) with suspected mild TBI was used to determine the assay cutoffs for GFAP and UCH-L1.

    9. How the Ground Truth for the Training Set Was Established

    For the 420 subjects in the training set used to establish assay cutoffs:

    • Subjects had suspected mild traumatic brain injury (Glasgow Coma Scale score of 13-15).
    • Blood was drawn within 12 hours of injury.
    • A head CT scan determination was performed.
    • The ground truth would have been established by the head CT scan results (presence or absence of acute traumatic intracranial lesions), similar to the clinical study's ground truth, though the specific process of expert review for these 420 cases is not detailed beyond "head CT scan determination." It's reasonable to infer a similar process of expert radiologist interpretation.
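
    The document states only that the 420-subject training set, with head CT findings, was used to determine the assay cutoffs; the method itself is not described. Purely as an illustration of one common approach (and not the manufacturer's documented procedure), the sketch below selects the highest candidate cutoff that still meets a target sensitivity on labeled training data; the function name, target, and demo data are hypothetical.

```python
import numpy as np

def pick_cutoff(values, ct_positive, target_sensitivity=0.95):
    """Illustrative only: choose the highest cutoff that keeps sensitivity at or
    above a target. `values` are biomarker concentrations; `ct_positive` is a
    boolean array of head-CT findings (both hypothetical inputs)."""
    values = np.asarray(values, float)
    ct_positive = np.asarray(ct_positive, bool)
    best = None
    for cutoff in np.sort(np.unique(values)):
        predicted_positive = values >= cutoff
        sensitivity = (predicted_positive & ct_positive).sum() / ct_positive.sum()
        if sensitivity >= target_sensitivity:
            best = cutoff        # keep raising the cutoff while sensitivity holds
        else:
            break
    return best

# Tiny synthetic demo (pg/mL values and CT labels are made up):
print(pick_cutoff([12, 25, 30, 55, 80, 140, 300, 650],
                  [0, 0, 0, 1, 1, 1, 1, 1]))      # -> 55.0
```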

    K Number
    K190815
    Device Name
    BrainScope TBI
    Date Cleared
    2019-09-11

    (166 days)

    Product Code
    Regulation Number
    882.1450
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    BrainScope TBI

    Intended Use

    BrainScope TBI is a multi-modal, multi-parameter assessment indicated for use as an adjunct to standard clinical practice to aid in the evaluation of patients who have sustained a closed head injury, and have a Glasgow Coma Scale (GCS) score of 13-15 (including patients with concussion/mild traumatic brain injury (mTBI)).

    BrainScope TBI provides a multi-parameter measure, the Concussion Index (CI), to aid in the evaluation of concussion in patients between the ages of 13-25 years who present with a GCS score of 15 following a head injury within the past 72 hours (3 days), in conjunction with a standard neurological assessment of concussion. The CI is computed from a multivariate algorithm based on the patient's electroencephalogram (EEG), augmented by neurocognitive measures and selected clinical symptoms.

    The BrainScope TBI Structural Injury Classification ("SIC") uses brain electrical activity (EEG) to determine the likelihood of structural brain injury visible on head CT for patients between the ages of 18-85 years who have a GCS score of 13-15, have sustained a closed head injury within the past 72 hours (3 days), and are being considered for a head CT. BrainScope TBI should not be used as a substitute for a CT scan. Negative likely corresponds to those with no structural brain injury visible on head CT. Positive likely corresponds to those with a structural brain injury visible on head CT. Equivocal may correspond to structural brain injury visible on head CT or may indicate the need for further observation or evaluation.

    BrainScope TBI provides a Brain Function Index (BFI) for the statistical evaluation of the human electroencephalogram (EEG), aiding in the evaluation of head injury as part of a multi-modal, multi-parameter assessment, in patients 18-85 years of age (with a GCS score of 13-15) who have sustained a closed head injury within the past 72 hours (3 days).

    The BrainScope TBI device is intended to record, measure, analyze, and display brain electrical activity utilizing the calculation of standard quantitative EEG (QEEG) parameters from frontal locations on a patient's forehead. The BrainScope TBI calculates and displays raw measures for the following standard QEEG measures: Absolute and Relative Power, Asymmetry, Coherence and Fractal Dimension. These raw measures are intended to be used for post hoc analysis of EEG signals for interpretation by a qualified user.

    BrainScope TBI also provides clinicians with quantitative measures of cognitive performance in patients 13-85 years of age to aid in the assessment of an individual's level of cognitive function. These measures interact with the CI and can be used stand alone.

    BrainScope TBI also stores and displays electronic versions of standardized clinical assessment tools that should be used in accordance with the assessment tools' general instructions. These tools do not interact with any other device measures, and are stand alone.

    Device Description

    BrainScope TBI (model: Ahead 500) is a portable, non-invasive, non-radiation emitting, point of care device intended to provide results and measures to support clinical assessments and aid in the diagnosis of concussion / mild traumatic brain injury (mTBI). The BrainScope TBI includes a new multivariate classification algorithm that analyzes a patient's electroencephalogram (EEG), augmented by neurocognitive performance and selected clinical symptoms to compute a multi-modal index called the Concussion Index (CI). BrainScope TBI provides the healthcare provider with a multi-parameter measure to aid in the evaluation of concussion following a head injury within the past 72 hours (3 days). The BrainScope TBI (Ahead 500) retains all the capabilities of the predicate (BrainScope TBI, model: Ahead 400) including the Structural Injury Classification (SIC) and the Brain Function Index (BFI). It also contains configurable, selectable computerized cognitive performance tests and digitized standard clinical assessment tools intended to provide a multi-modal panel of measures to support the clinical assessment of concussion / mTBI.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the BrainScope TBI (model: Ahead 500) device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Metric / Acceptance Criteria (Performance Goal) / Reported Device Performance (95% CI)
      Sensitivity: goal 0.69; reported 0.8599 (0.8050, 0.9041)
      Specificity: goal 0.565; reported 0.7078 (0.6588, 0.7535)
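
    Read together with the performance goals, the implied acceptance check is the usual one for a performance-goal study: the lower bound of each 95% confidence interval should exceed the corresponding goal. The short sketch below states that comparison explicitly using only the numbers reported above; the framing as a lower-bound test is an interpretation, not a statement from the document.

```python
# Compare the reported lower 95% CI bounds against the stated performance goals.
results = {
    "Sensitivity": {"goal": 0.69,  "estimate": 0.8599, "ci_lower": 0.8050},
    "Specificity": {"goal": 0.565, "estimate": 0.7078, "ci_lower": 0.6588},
}
for metric, r in results.items():
    print(f"{metric}: estimate {r['estimate']:.4f}, "
          f"lower bound {r['ci_lower']:.4f} > goal {r['goal']:.3f}? {r['ci_lower'] > r['goal']}")
```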

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: 580 subjects
      • 229 matched controls
      • 144 healthy volunteers
      • 207 subjects who sustained closed head injury and were removed from play
    • Data Provenance: The study was conducted across 10 US clinical sites, including High Schools, Colleges, and Concussion Clinics. The study design appears to be prospective, given it involved testing subjects at different time points and with specific inclusion/exclusion criteria.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document does not explicitly state the number of experts used or their specific qualifications (e.g., "Radiologist with 10 years of experience") for establishing the ground truth.

    4. Adjudication Method for the Test Set

    The document does not explicitly state the adjudication method used for the test set.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, the document does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study to assess how much human readers improve with AI vs. without AI assistance. The study focuses on the standalone performance of the Concussion Index (CI) algorithm.

    6. Standalone (Algorithm Only) Performance Study

    Yes, a standalone performance study was done for the Concussion Index (CI). The reported sensitivity and specificity values are for the algorithm's performance in classifying concussions.

    7. Type of Ground Truth Used

    The clinical reference standard (ground truth) incorporated elements from guidelines published in the International Conference on Concussion in Sport (McCrory 2017; 2013) as well as the National Collegiate Athletic Association (NCAA) concussion policy. This suggests a clinical diagnosis/consensus-based ground truth, likely established by clinicians based on established guidelines and possibly direct observations or outcomes related to concussion (e.g., "removed from play"). It's not explicitly stated to be solely pathology or patient outcomes data, but rather a combination of clinical criteria.

    8. Sample Size for the Training Set

    The document states that the "cutoff (threshold) CI [was] derived from an algorithm development study that was independent of the validation study," but it does not provide the sample size for this algorithm development (training) study.

    9. How the Ground Truth for the Training Set Was Established

    The document implies that the ground truth for the "algorithm development study" (training set) would have been established using similar clinical criteria as the validation study, i.e., "consistent with similar changes seen in subjects with concussion," incorporating elements from the International Conference on Concussion in Sport guidelines and NCAA concussion policy. However, it does not explicitly detail the process for establishing ground truth for the training set.


    K Number
    K190970
    Date Cleared
    2019-08-13

    (123 days)

    Product Code
    Regulation Number
    888.3040
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    PROSTEP™ TBI™ (Tailors Bunion Implant) System

    Intended Use

    The PROSTEP™ TBI™ (Tailor's Bunion Implant) system is indicated for the fixation of 5th metatarsal osteotomies made in the correction of Tailor's Bunion.

    Device Description

    The PROSTEP™ TBI™ (Tailors Bunion Implant) System is intended for use in bone reconstruction and osteotomy of the fifth metatarsal. The implants are provided sterile and consist of one MIS Bunion implant and one ORTHOLOC 3Di screw. Based on patient anatomy and surgeon's needs, different component sizes can be selected.

    AI/ML Overview

    This is a 510(k) premarket notification for a medical device (PROSTEP™ TBI™ System), which typically establishes substantial equivalence to a predicate device rather than conducting extensive clinical studies with acceptance criteria and performance metrics in the same way a de novo or PMA submission might. Therefore, many of the requested categories related to clinical performance and AI algorithm evaluation may not be directly applicable or explicitly detailed in this type of submission.

    Based on the provided document, here's an analysis of the requested information:

    1. A table of acceptance criteria and the reported device performance

    The document describes non-clinical evidence (construct fatigue testing and bacterial endotoxin testing) to support substantial equivalence. It does not provide specific acceptance criteria or reported performance in a table format for clinical device performance in terms of diagnostic accuracy or reader improvement metrics, as the device is an implant and not an AI-driven diagnostic tool.

    For the non-clinical testing, the document states: "The subject was evaluated to the predicate through construct fatigue testing to support the safety and effectiveness of the subject device system. Additionally, bacterial endotoxin testing was done on a representative part." While these tests would have internal acceptance criteria (e.g., fatigue cycles survived, endotoxin limits), those specific criteria and detailed results are not provided in this summary. The conclusion is simply that the testing "shows no new worst case" and supports substantial equivalence.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    Not applicable. This is not an AI diagnostic device, so there is no "test set" of clinical images or patient data in the typical sense for evaluating algorithm performance. The "testing" mentioned is mechanical (fatigue) and biocompatibility (endotoxin) on the device itself.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not applicable, as there is no test set for clinical performance evaluation requiring expert ground truth in this context.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No, an MRMC study was not done. This is not an AI diagnostic device. The submission explicitly states under "SUBSTANTIAL EQUIVALENCE - CLINICAL EVIDENCE": "N/A."

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    No, a standalone algorithm performance study was not done. This is not an AI diagnostic device.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Not applicable for clinical performance. For the described non-clinical testing, the "ground truth" would be established engineering standards and biocompatibility requirements.

    8. The sample size for the training set

    Not applicable. This is not an AI device, so there is no training set for an algorithm.

    9. How the ground truth for the training set was established

    Not applicable.


    K Number
    K190807
    Date Cleared
    2019-04-26

    (28 days)

    Product Code
    Regulation Number
    862.1110
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Device Name :

    VITROS XT Chemistry Products TBIL-ALKP Slides

    Intended Use

    Rx Only. For in vitro diagnostic use only.

    The TBIL test within the VITROS XT Chemistry Products TBIL-ALKP Slides quantitatively measures total bilirubin (TBIL) concentration in serum and plasma using VITROS XT 7600 Integrated Systems. Measurements of the levels of bilirubin, an organic compound formed during the normal destruction of red blood cells, are used in the diagnosis and treatment of liver, hematological and metabolic disorders, including hepatitis and gall bladder block.

    The ALKP test within the VITROS XT Chemistry Products TBIL-ALKP Slides quantitatively measures alkaline phosphatase (ALKP) activity in serum and plasma using VITROS XT 7600 Integrated Systems. Measurements of alkaline phosphatase or its isoenzymes are used in the diagnosis and treatment of liver, bone, parathyroid, and intestinal diseases.

    Device Description

    Not Found

    AI/ML Overview

    This is an FDA 510(k) clearance letter for an in vitro diagnostic (IVD) device, specifically for VITROS XT Chemistry Products TBIL-ALKP Slides. The provided text is a regulatory communication and does not contain the acceptance criteria or study details for the device's performance.

    To answer your request, I would need access to the actual 510(k) summary, often referred to as a "510(k) Premarket Notification." This document typically includes the performance data, acceptance criteria, and study designs to demonstrate substantial equivalence to a predicate device.

    The information you've provided only states:

    • Device Name: VITROS XT Chemistry Products TBIL-ALKP Slides
    • Intended Use: Quantitative measurement of total bilirubin (TBIL) and alkaline phosphatase (ALKP) in serum and plasma using VITROS XT 7600 Integrated Systems.
    • Regulatory Class: Class II
    • Product Code: CIG (Bilirubin (total or direct) test system), CJE (Alkaline phosphatase test system)
    • Date of Clearance: April 26, 2019

    Without the 510(k) summary or a similar technical document, I cannot extract the specific acceptance criteria, study details, sample sizes, or ground truth information you've requested.

    Therefore, I cannot provide the requested table and study details based solely on the provided text.


    K Number
    K183241
    Date Cleared
    2019-02-19

    (90 days)

    Product Code
    Regulation Number
    882.1450
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    BrainScope TBI (Model: Ahead 400)

    Intended Use

    BrainScope TBI is a multi-modal, multi-parameter assessment indicated for use as an adjunct to standard clinical practice to aid in the evaluation of patients who have sustained a closed head injury within the past 72 hours (3 days), are between the ages of 18-85 years, have a Glasgow Coma Scale (GCS) score of 13-15 (including patients with concussion / mild traumatic brain injury (mTBI)), and are being considered for a head CT. BrainScope TBI should not be used as a substitute for a CT scan.

    The BrainScope TBI Structural Injury Classification ("SIC") uses brain electrical activity to determine the likelihood of structural brain injury visible on head CT. Negative likely corresponds to those with no structural brain injury visible on head CT. Positive likely corresponds to those with a structural brain injury visible on head CT. Equivocal may correspond to structural brain injury visible on head CT or may indicate the need for further observation or evaluation.

    BrainScope TBI provides a Brain Function Index (BFI) for the statistical evaluation of the human electroencephalogram (EEG), aiding in the evaluation of head injury as part of a multi-modal, multi-parameter assessment.

    The BrainScope TBI device is intended to record, measure, analyze, and display brain electrical activity utilizing the calculation of standard quantitative EEG (QEEG) parameters from frontal locations on a patient's forehead. The BrainScope TBI calculates and displays raw measures for the following standard QEEG measures: Absolute and Relative Power, Asymmetry, Coherence and Fractal Dimension. These raw measures are intended to be used for post hoc analysis of EEG signals for interpretation by a qualified user.

    BrainScope TBI also provides clinicians with quantitative measures of cognitive performance to aid in the assessment of an individual's level of cognitive function. These measures do not interact with any other device measures, and are stand alone.

    BrainScope TBI also stores and displays electronic versions of standardized clinical assessment tools that should be used in accordance with the assessment tools' general instructions. These tools do not interact with any other device measures, and are stand alone.

    Device Description

    BrainScope TBI is a portable, non-invasive, non-radiation emitting, point of care device intended to provide results and measures to support clinical assessments and aid in the diagnosis of mild traumatic brain injury (mTBI). It also contains configurable, selectable computerized cognitive performance tests and digitized standard clinical assessments intended to provide a multi-modal panel of measures to support the clinical assessment of concussion / mTBI. BrainScope TBI provides healthcare professionals with a validated and clinically accepted library of concussion / mTBI assessments.

    AI/ML Overview

    1. Acceptance Criteria and Reported Device Performance:

    The document does not explicitly state formal acceptance criteria with numerical thresholds. Instead, it focuses on demonstrating substantial equivalence to predicate devices. The performance data presented primarily confirms that the new modifications (cognitive performance tests, PECARN, wireless connectivity) work as specified and do not negatively impact existing functionalities.

    Table of Acceptance Criteria (Inferred from Substantial Equivalence Claim) and Reported Device Performance:

    Acceptance Criteria (Inferred/Implicit) | Reported Device Performance

    For Cognitive Performance Tests:
    • Acceptance criterion: Functionality of new cognitive tests
    • Reported performance: Normative data was collected from 707 healthy individuals (age 13-85) to construct databases for the cognitive tests. Test data demonstrated that the modifications (additional cognitive performance tests and PECARN) were implemented as per specifications. The new tests and Reliable Change Index (RCI) output are "well accepted in clinical practice for assessment of Adult and Adolescent patient population." (A worked RCI sketch follows this table.)

    For Standard Clinical Assessments:
    • Acceptance criterion: Integration of PECARN
    • Reported performance: Test data demonstrated that the modifications (additional cognitive performance tests and PECARN) were implemented as per specifications. PECARN was added to the existing library of digitized standard clinical assessments. The expanded availability of clinical assessment tools does not affect safety and effectiveness and increases utility.

    For Wireless Connectivity (OTA):
    • Acceptance criterion: Functionality of wireless connectivity
    • Reported performance: Test data demonstrated that the modification (wireless connectivity) was implemented as per specifications. The BrainScope TBI has wireless connectivity to accept Over the Air (OTA) software upgrades. This provides additional data transfer capabilities.

    For Existing EEG Algorithms (SIC, BFI, QEEG):
    • Acceptance criterion: No impact on existing functionality
    • Reported performance: The new modifications "did not impact existing device functionality including core EEG based algorithms" (e.g., Structural Injury Classification (SIC) and Brain Function Index (BFI)). The device maintains the same technical characteristics as the predicate for EEG parameters (e.g., bandwidth, CMRR, noise floor, ADC resolution, sampling rate, electrode placement, electrode positions, electrode material, real-time EEG display, EEG-based classification algorithm).

    For Basic Safety and EMC Standards:
    • Acceptance criterion: Conformity to relevant standards
    • Reported performance: The BrainScope TBI device conforms to "all same basic safety and EMC standards as the predicate." It was also tested to the most recent recognized consensus standard for EMC (IEC 60601-1-2 Ed. 4.0 2014) and other listed standards (e.g., IEC 60601-1/A1:2012, IEC 60601-1-6/A1:2013, IEC 60601-2-26:2012, ANSI/AAMI EC12:2000/(R)2010, ANSI/AAMI/ISO 10993-1:2009, ANSI/AAMI/ISO 10993-5:2009, ANSI/AAMI/ISO 10993-10:2010, MIL-STD-810G, IEC 60529 (2004), ASTM D4169-09).

    For Overall Safety and Effectiveness:
    • Acceptance criterion: Comparable to predicate devices
    • Reported performance: Performance data demonstrated that the BrainScope TBI is "as safe and effective as the predicates."
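
    The Reliable Change Index cited for the cognitive tests is, in its standard Jacobson-Truax form, straightforward to compute. The sketch below is a generic illustration with placeholder values; the normative standard deviation and test-retest reliability of the actual BrainScope cognitive tests are not reported in this summary.

        import math

        def reliable_change_index(baseline, retest, norm_sd, test_retest_r):
            """RCI = (retest - baseline) / standard error of the difference score."""
            sem = norm_sd * math.sqrt(1.0 - test_retest_r)  # standard error of measurement
            s_diff = math.sqrt(2.0 * sem ** 2)              # SE of the difference between two administrations
            return (retest - baseline) / s_diff

        # Placeholder numbers; |RCI| > 1.96 is commonly read as reliable change at the 95% level.
        rci = reliable_change_index(baseline=48.0, retest=41.0, norm_sd=10.0, test_retest_r=0.80)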

    2. Sample size used for the test set and the data provenance:

    • Sample Size: 707 healthy individuals.
    • Data Provenance: The document states that normative data was collected from "707 healthy individuals" for the cognitive tests. It does not specify the country of origin, but given the context of an FDA submission, the data would typically be expected to come from a US-based population or one generalizable to the US. The collection was prospective, for the purpose of establishing normative data for the new cognitive tests.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    The document does not explicitly state the number or qualifications of experts used to establish ground truth for the test set of the new cognitive performance tests. It refers to the new tests and RCI output being "well accepted in clinical practice," which implies expert consensus in the field, but no specific quantification of experts is provided within this document. For the EEG-based Structural Injury Classification (SIC), the ground truth is based on the visible injury on head CT, which is a clinical standard.

    4. Adjudication method for the test set:

    Not explicitly stated for the cognitive performance tests or the overall assessment of device modifications. Given the nature of normative data collection for cognitive tests, the "ground truth" is often statistical (e.g., population averages, standard deviations) rather than expert adjudication on individual cases for classification. For the SIC, the ground truth is based on head CT findings, which are objective imaging results.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    The document does not mention an MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. The device is described as an "adjunct to standard clinical practice" and "should not be used as a substitute for a CT scan," suggesting it provides additional information rather than directly assisting in CT interpretation by a reader.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:

    Yes, the Structural Injury Classification (SIC) and Brain Function Index (BFI) are described as "EEG based algorithms" which "use brain electrical activity to determine the likelihood of structural brain injury visible on head CT." This implies a standalone algorithmic assessment of the EEG data, providing "Negative, Equivocal and Positive outputs" for SIC and a "statistical evaluation of the human EEG" for BFI.
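
    To make the three-way output concrete, the sketch below shows one generic way a standalone classifier can map a continuous discriminant score to the Negative / Equivocal / Positive labels described above. The score scale and both cutoffs are invented for illustration; the cleared SIC algorithm's features and thresholds are not disclosed in this summary.

        def classify_sic(score: float, low_cut: float = 0.3, high_cut: float = 0.7) -> str:
            """Map a hypothetical discriminant score in [0, 1] to a three-way classification."""
            if score < low_cut:
                return "Negative"   # structural injury visible on head CT unlikely
            if score > high_cut:
                return "Positive"   # structural injury visible on head CT likely
            return "Equivocal"      # indeterminate; may warrant further observation or evaluation

        print(classify_sic(0.12), classify_sic(0.55), classify_sic(0.91))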

    7. The type of ground truth used:

    • For Structural Injury Classification (SIC): Visible structural brain injury on head CT. This is a form of outcomes data or a clinical standard.
    • For Cognitive Performance Tests: Normative data derived from a population of healthy individuals, which establishes a baseline for cognitive function (a comparison sketch follows this list).
    • For QEEG parameters: Standards related to the accurate recording, measuring, analyzing, and displaying of brain electrical activity, as well as comparison to the predicate device's performance characteristics.
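
    A normative database is typically used as a reference by expressing an individual's raw score as a z-score (and percentile) against the healthy-sample mean and standard deviation. The values below are invented for illustration and are not drawn from the 707-subject normative data described in this summary.

        from statistics import NormalDist

        def normative_z(raw_score, norm_mean, norm_sd):
            """Standardize a raw cognitive score against a normative mean and SD."""
            z = (raw_score - norm_mean) / norm_sd
            percentile = NormalDist().cdf(z) * 100.0
            return z, percentile

        z, pct = normative_z(raw_score=42.0, norm_mean=50.0, norm_sd=10.0)  # z = -0.8, ~21st percentile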

    8. The sample size for the training set:

    The document does not specify the sample size for the training set. It only mentions the "normative data collected from 707 healthy individuals" used to construct databases for the cognitive tests, which likely serves as a reference/validation set for these specific components. The EEG-based algorithms (SIC, BFI) were likely developed and trained on separate, larger datasets that are not detailed in this particular summary, as these algorithms are shared with the predicate device (BrainScope One).

    9. How the ground truth for the training set was established:

    The document does not provide details on how the ground truth for the training set was established for the core EEG algorithms (SIC, BFI) as these algorithms were already established and cleared under the predicate device (BrainScope One). For the new cognitive performance tests and PECARN, the ground truth for the "normative data" was established by collecting data from "healthy individuals," implying a healthy control group without known head injury or cognitive impairment.
