
510(k) Data Aggregation

    K Number: K173915
    Manufacturer:
    Date Cleared: 2018-03-22 (90 days)
    Product Code: LQD
    Regulation Number: N/A
    Reference & Predicate Devices
    Intended Use

    The Test of Variables of Attention (T.O.V.A.) provides healthcare professionals with objective measurements of attention and inhibitory control. The visual T.O.V.A. aids in the assessment of, and evaluation of treatment for, attention deficits, including attention-deficit/hyperactivity disorder (ADHD). The auditory T.O.V.A. aids in the assessment of attention deficits, including ADHD. T.O.V.A. results should only be interpreted by qualified professionals.

    Device Description

    The Test of Variables of Attention (T.O.V.A.) is an accurate and objective continuous performance test (CPT) that measures the key components of attention and inhibitory control. The T.O.V.A. is used by qualified healthcare professionals in the assessment of attention deficits, including attention-deficit/hyperactivity disorder (ADHD), in children and adults. In addition, the visual T.O.V.A. is used to evaluate treatment for attention deficits, including ADHD.

    The T.O.V.A. is a culture- and language-free, sufficiently long computerized test that requires no left/right discrimination or sequencing. Responses to visual or auditory stimuli are recorded with a specially designed, highly accurate (±1 ms) microswitch. The T.O.V.A. calculates response time variability (consistency), response time (speed), commissions (impulsivity), and omissions (focus and vigilance). These calculations are then compared to a large age- and gender-matched normative sample (over 1,700 individuals for the visual test, and over 2,500 individuals for the auditory test), as well as to a sample population of individuals independently diagnosed with ADHD. These comparison results are used to create an immediately available, easy-to-read report.

    The T.O.V.A. system includes: a USB flash drive with software installer for Mac and Windows PCs, a T.O.V.A. USB device, a T.O.V.A. Microswitch, an Installation Guide, a User's Manual, a Clinical Manual, and accessory cables (USB, VGA, and audio cables).
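The scoring pipeline described above (omissions, commissions, response time, and response-time variability, each compared to an age- and gender-matched normative sample) can be sketched in a few lines. This is an illustrative assumption about how CPT scoring generally works, not The TOVA Company's actual algorithm; all function names, data layouts, and normative values below are hypothetical.

```python
# Generic continuous performance test (CPT) scoring sketch.
# NOTE: illustrative only -- not the T.O.V.A.'s proprietary algorithm.
from statistics import mean, stdev

def cpt_metrics(trials):
    """trials: list of (is_target, responded, rt_ms) tuples;
    rt_ms is None when no response was made."""
    targets = [t for t in trials if t[0]]
    nontargets = [t for t in trials if not t[0]]
    rts = [t[2] for t in targets if t[1]]  # response times on correct hits
    return {
        # missed targets (focus and vigilance)
        "omissions": sum(1 for t in targets if not t[1]) / len(targets),
        # responses to non-targets (impulsivity)
        "commissions": sum(1 for t in nontargets if t[1]) / len(nontargets),
        "rt_mean": mean(rts),  # response time (speed)
        "rt_sd": stdev(rts),   # response time variability (consistency)
    }

def z_score(value, norm_mean, norm_sd):
    """Compare one metric to a hypothetical age- and gender-matched
    normative cell (normative mean/SD values are assumptions)."""
    return (value - norm_mean) / norm_sd
```

In use, each of the four metrics would be converted to a z-score (or similar standardized score) against the matched normative cell, which is what lets the report flag atypical values.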

    AI/ML Overview

    This document is a 510(k) premarket notification for the Test of Variables of Attention (T.O.V.A.) version 9.0. It primarily focuses on demonstrating substantial equivalence to a predicate device (QbTest) rather than outlining specific clinical trials to prove device performance against acceptance criteria in the manner one might find for an AI/ML medical device.

    Therefore, many of the requested details about acceptance criteria and a study proving those criteria are not explicitly present in the provided text, as the submission relies on demonstrating equivalence through technical specifications, safety, and functionality, rather than novel clinical performance endpoints.

    However, I can extract and infer information relevant to the various sections of your request based on the provided document:


    Acceptance Criteria and Device Performance (Inferred from Substantial Equivalence Claim)

    Since this is a 510(k) submission, the "acceptance criteria" are established primarily by demonstrating substantial equivalence to a predicate device, meaning the new device performs at least as safely and effectively as the predicate for its stated indications for use. There is no table of specific clinical performance metrics with numerical targets of the kind expected for a novel AI/ML device.

    The "device performance" is primarily assessed against regulatory standards (electrical safety, EMC) and functional verification, as well as the comparison to the predicate device's established performance (which is implicitly "accepted" by prior FDA clearance).

    1. Table of Acceptance Criteria and Reported Device Performance

    • Criterion (inferred): Safety and Essential Performance (Electrical): compliance with IEC 60601-1:2005.
      Performance: Passed. "The T.O.V.A. passed the state of the art for electrical safety and functional testing." (Performed by UL, Inc.)
    • Criterion (inferred): Safety and Essential Performance (Electromagnetic Compatibility): compliance with IEC 60601-1-2:2014.
      Performance: Passed. "The T.O.V.A. passed the state of the art for electromagnetic compatibility and electromagnetic immunity testing." (Performed by Element, Inc.)
    • Criterion (inferred): Functional Verification: the device functions as intended, implicitly without introducing new safety or effectiveness concerns relative to the predicate.
      Performance: Passed. "The T.O.V.A. functioned as intended, passing all major verification tests, and was FDA cleared under K170082." (Performed by The TOVA Company.)
    • Criterion (inferred): Timing Accuracy: measurement of response times (compared to the predicate, whose accuracy is "Unknown").
      Performance: Accurate to ±1 millisecond. (This is a specific performance claim for the T.O.V.A.)
    • Criterion (inferred): Normative Data Sample Size: a sufficiently large and representative normative sample for comparison.
      Performance: Visual test: over 1,700 individuals (ages 4-80); auditory test: over 2,500 individuals (ages 6-29). This is larger than the predicate's 1,307 individuals (visual, ages 6-60).
    • Criterion (inferred): Device Components and Functionality: similar technological characteristics and principles of operation to the predicate.
      Performance: Similar. Both are continuous performance tests (CPTs) with Go/No-Go tasks and visual stimuli (geometric shapes), and both use a microswitch for subject response. The T.O.V.A. adds auditory stimuli, has larger normative data, and is compatible with Mac in addition to Windows. These are deemed "minor technological differences" that "raise no new issues of safety or effectiveness."
    • Criterion (inferred): Intended Use/Indications for Use: substantially equivalent clinical utility for assessing attention deficits and ADHD.
      Performance: Substantially equivalent. Both provide objective measurements for assessing and evaluating treatment for attention deficits/ADHD; the T.O.V.A. adds specific mention of the auditory test for assessment of attention deficits, including ADHD. The scope of use remains limited to qualified professionals.

    2. Sample size used for the test set and the data provenance:

    • Test Set (for the K173915 submission specifically to demonstrate substantial equivalence): The document primarily refers to technical testing (IEC 60601-1, IEC 60601-1-2, T.O.V.A. Verification Testing) and a comparison with the predicate device's specifications. These are not "test sets" in the sense of a clinical performance study with patient data against a ground truth.
      • For the technical verification, there isn't a specified "sample size" of devices, but rather laboratory testing of the device system.
      • The "normative samples" used to establish the large age- and gender-matched comparison data for the T.O.V.A. itself are:
        • Visual Test: Over 1,700 individuals (stated as 1,714 in the comparison table)
        • Auditory Test: Over 2,500 individuals (stated as 2,680 in the comparison table)
      • Data Provenance: Not specified in terms of country of origin or retrospective/prospective. The description of normative samples and "sample population of individuals independently diagnosed with ADHD" suggests pre-existing or collected data, but the collection methodology (retrospective/prospective) is not detailed for this specific 510(k) summary.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • This is not applicable as there isn't a "test set" in the context of clinical images or patient cases requiring expert interpretation for ground truth. The device measures objective quantities (response time, omissions, commissions).
    • For the normative data, the document mentions comparison to a "sample population of individuals independently diagnosed with ADHD." This implies that clinical diagnoses (established by qualified healthcare professionals) served as a form of "ground truth" for the ADHD sample, but the number or specific qualifications of these diagnosing professionals are not provided. The T.O.V.A. itself is intended to "aid in the assessment" and "results should only be interpreted by qualified professionals."

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • Not applicable. There is no clinical imaging or interpretative "test set" here that would require an adjudication process.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance:

    • Not applicable. This device is not an AI-assisted diagnostic tool that aids human readers in interpreting complex data like medical images. It's a continuous performance test that generates objective measurements.

    6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done:

    • The T.O.V.A. is inherently a "standalone" device in terms of its measurement capabilities, as it directly calculates various performance metrics (response time, omissions, etc.). However, it is explicitly stated that "T.O.V.A. results should only be interpreted by qualified professionals." This means it's a tool for professionals, not a standalone diagnostic that provides a final diagnosis without human interpretation. So, while the measurement itself is algorithmic, the clinical utility is human-in-the-loop.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • For the "sample population of individuals independently diagnosed with ADHD," the ground truth was clinical diagnosis (presumably by qualified professionals).
    • For the normative data, the "ground truth" is simply a large, healthy normative population against which patient results are compared.

    8. The sample size for the training set:

    • The term "training set" doesn't strictly apply in the sense of an AI/ML algorithm being trained on data to learn patterns.
    • However, the normative samples which the device uses for comparison can be considered analogous to a "training set" for establishing statistical norms. These are:
      • Visual T.O.V.A.: Over 1,700 individuals (1,714)
      • Auditory T.O.V.A.: Over 2,500 individuals (2,680)

    9. How the ground truth for the training set was established:

    • For the normative samples, the "ground truth" was established by collecting data from a large population of individuals, presumably healthy or without a pre-existing ADHD diagnosis at the time of data collection for the normative comparison. The document states they are "age- and gender-matched normative sample."
    • For the "sample population of individuals independently diagnosed with ADHD" (used for comparison to distinguish ADHD from normative data), the ground truth was established by independent clinical diagnoses of ADHD. The specifics of how these diagnoses were confirmed (e.g., DSM criteria, multiple clinicians, etc.) are not provided in this 510(k) summary.

    K Number: K170082
    Manufacturer:
    Date Cleared: 2017-05-17 (127 days)
    Product Code: LQD
    Regulation Number: N/A
    Reference & Predicate Devices
    Intended Use

    The Test of Variables of Attention (T.O.V.A.) provides healthcare professionals with objective measurements of attention and inhibitory control, which aid in the assessment of attention deficits, including attention-deficit/hyperactivity disorder (ADHD). T.O.V.A. results should only be interpreted by qualified healthcare professionals.

    Device Description

    The Test of Variables of Attention (T.O.V.A.) is an accurate and objective continuous performance test (CPT) that measures the key components of attention and inhibitory control. The T.O.V.A. is used by qualified healthcare professionals in the assessment of attention deficits, including attention-deficit/hyperactivity disorder (ADHD), in children and adults.
    The T.O.V.A. is a culture- and language-free, sufficiently long computerized test that requires no left/right discrimination or sequencing. Responses to visual or auditory stimuli are recorded with a specially designed, highly accurate (±1 ms) microswitch. The T.O.V.A. calculates response time variability (consistency), response time (speed), commissions (impulsivity), and omissions (focus and vigilance). These calculations are then compared to a large age- and gender-matched normative sample (over 1,700 individuals for the visual test, and over 2,500 individuals for the auditory test), as well as to a sample population of individuals independently diagnosed with ADHD. These comparison results are used to create an immediately available, easy-to-read report.
    The T.O.V.A. system includes: a USB flash drive with software installer for Mac and Windows PCs, a T.O.V.A. USB device, a T.O.V.A. Microswitch, an Installation Guide, a User's Manual, a Clinical Manual, and accessory cables (USB, VGA, and audio cables).

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the Test of Variables of Attention (T.O.V.A.) version 9.0 device, based on the provided text:

    Acceptance Criteria and Device Performance

    The provided text focuses on the device's adherence to safety, essential performance, and electromagnetic compatibility (EMC) standards rather than clinical performance metrics for diagnosing ADHD. The acceptance criteria essentially stem from proving substantial equivalence to the predicate device, the Gordon Diagnostic System Model I (K854903), in these engineering and regulatory aspects.

    • Criterion (test): IEC 60601-1:2005, "Medical electrical equipment - Part 1: General requirements for basic safety and essential performance."
      Result: The T.O.V.A. passed the state of the art for electrical safety and functional testing. It is considered equivalent or better in safety and functionality compared to the predicate GDS.
    • Criterion (test): IEC 60601-1-2:2014, "Medical electrical equipment - Part 1-2: General requirements for basic safety and essential performance - Collateral Standard: Electromagnetic disturbances - Requirements and tests."
      Result: The T.O.V.A. passed the state of the art for electromagnetic compatibility and electromagnetic immunity testing. It is considered equivalent or better in electrical performance compared to the predicate GDS.
    • Criterion (test): T.O.V.A. Verification Testing (software functionality).
      Result: The T.O.V.A. functioned as intended, passing all major verification tests as described in "Section 16 – Software", specifically the "Verification and Validation Summary ('VVS01')". Minor unresolved anomalies were found and listed. The device is deemed equivalent to, or even more rigorously verified than, the GDS (for which no equivalent verification tests were listed in its 1989 510(k)).

    Study Information Pertaining to Device Verification and Validation:

    The document primarily describes a set of verification and validation activities rather than a clinical study evaluating diagnostic performance. The focus is on demonstrating that the T.O.V.A. 9.0 functions correctly and meets established engineering standards and design requirements.

    1. Sample Size used for the test set and data provenance:

      • IEC 60601-1:2005 & IEC 60601-1-2:2014: These tests involve the physical device itself and do not typically use "patient samples." The "test set" here refers to the device prototypes subjected to standardized electrical and functional tests.
      • T.O.V.A. Verification Testing: This involved the T.O.V.A. system (software and hardware). The "test set" refers to the various functions and modules of the software and hardware that were tested according to a detailed matrix of requirements. No human participant data is mentioned for these verification tests.
      • Normative Data: While not a "test set" for performance evaluation against a gold standard in the context of device accuracy, the device's internal comparison data uses a large normative sample:
        • Visual test: over 1,700 individuals
        • Auditory test: over 2,600 individuals
        • These normative samples were described as "age- and gender-matched" and "non-ADHD individuals." The provenance (country of origin, retrospective/prospective) is not specified in the provided text.
    2. Number of experts used to establish the ground truth for the test set and their qualifications:

      • For the engineering and software verification tests described (IEC standards, T.O.V.A. Verification Testing), the "ground truth" is defined by the technical specifications of the standards and the design requirements. The "experts" would be the certified test engineers at the third-party facilities (UL, Inc. and Element, Inc.) and The TOVA Company's internal design and quality assurance teams, who assess compliance with these objective standards. Specific qualifications beyond being "third-party test facilities" are not detailed.
      • Regarding the normative data used by the device (not to test the device's diagnostic accuracy directly), the text mentions "a sample population of individuals independently diagnosed with ADHD." However, it does not specify the number or qualifications of experts who established these independent ADHD diagnoses.
    3. Adjudication method for the test set:

      • For the engineering and software verification tests, the adjudication method is typically direct assessment against the defined standard or requirement. There isn't an "adjudication method" in the sense of multiple experts reviewing cases to establish a clinical ground truth.
      • For the normative data and the "independently diagnosed with ADHD" population, the adjudication method for the ADHD diagnosis itself is not described.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI versus without AI assistance:

      • No MRMC comparative effectiveness study is mentioned in the provided text. The T.O.V.A. device provides objective measurements to aid in assessment by healthcare professionals; it is not described as an AI-assisted diagnostic tool where human readers improve with its assistance in the typical sense of interpreting images or complex data with AI overlays. Its function is to generate objective metrics for a clinician's interpretation.
    5. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done:

      • The T.O.V.A. is described as providing "objective measurements" that "aid in the assessment of attention deficits." It explicitly states that "T.O.V.A. results should only be interpreted by qualified healthcare professionals." This indicates it's designed as a tool for a human-in-the-loop workflow, not as a standalone diagnostic device. Therefore, a standalone performance study in a diagnostic capacity would not be applicable or described for this device.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • For the engineering tests, the ground truth is compliance with the IEC 60601-1 and IEC 60601-1-2 standards and the device's own design specifications (for verification testing).
      • For the context of the device's internal comparisons, it uses a "normative sample" of "non-ADHD individuals" and a "sample population of individuals independently diagnosed with ADHD." The ground truth for these populations is derived from clinical diagnosis (likely expert consensus or established diagnostic criteria for ADHD), though the specifics are not detailed.
    7. The sample size for the training set:

      • The document does not describe a machine learning algorithm that requires a "training set" in the conventional sense. The T.O.V.A. functions by comparing an individual's performance to existing "normative data." This normative data could be considered analogous to a "training set" for establishing typical ranges.
        • Visual normative data: 1,714 individuals
        • Auditory normative data: 2,680 individuals
    8. How the ground truth for the training set was established:

      • For the normative data (analogous to a training set), the ground truth for these individuals was established as "non-ADHD individuals." This implies that they were clinically confirmed not to have ADHD. The specific diagnostic process or criteria used to classify these "non-ADHD" individuals are not detailed in the provided text. It is reasonable to assume this was established through standard clinical diagnostic procedures, excluding individuals with an ADHD diagnosis.

    K Number: K143468
    Device Name: QbCheck
    Manufacturer:
    Date Cleared: 2016-03-22 (474 days)
    Product Code: LQD
    Regulation Number: N/A
    Reference & Predicate Devices
    Intended Use

    QbCheck provides health care professionals with objective measurements of hyperactivity and inattention to aid in the clinical assessment of ADHD and in the evaluation of treatment interventions in patients with ADHD. QbCheck results should be interpreted only by qualified health care professionals.

    Device Description

    QbCheck is a non-invasive test that has been developed to provide precise quantitative assessment of the capacity of an individual to pay attention to visual stimuli and inhibit impulses. There are three cardinal disturbances in Attention-Deficit Hyperactivity Disorder (ADHD): impaired attention, hyperactivity, and impulsivity. QbCheck provides an accurate and reproducible measure of an individual's capacity in each of these three domains by utilizing a consistent challenge paradigm coupled with detailed real-time measurements of behavior and performance. The fundamental core of QbCheck is a computer-assisted attention and impulse control task with simultaneous recording of activity. QbCheck is an online solution and no extra hardware is needed, as the test is performed on the user's own computer. For the activity tracking analysis, QbCheck uses the built-in camera in the user's laptop or a separate web camera on the user's desktop computer. QbCheck consists of the following:
    • QbCheck client and server software
    • Online test with instructions, a continuous performance task (CPT), and motion measurement technology through a web camera
    • Access to a remote server which generates test results
    • Secure access to a result report
    • User manual
    • Technical manual
    • QbCheck Behavior Observation Form
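Webcam-based motion measurement of the kind described above is commonly built on frame differencing: the more the subject moves between consecutive video frames, the larger the pixel-level change. The sketch below illustrates that generic technique only; it is an assumption, not QbCheck's proprietary motion measurement, and the function name and data layout are hypothetical.

```python
# Generic frame-differencing activity index.
# NOTE: illustrative assumption, not QbCheck's actual algorithm.
def activity_index(frames):
    """frames: list of equal-length sequences of grayscale pixel
    values (0-255), one sequence per video frame. Returns the mean
    absolute per-pixel change across the whole recording."""
    diffs = []
    for prev, cur in zip(frames, frames[1:]):
        # average absolute intensity change between consecutive frames
        diffs.append(sum(abs(a - b) for a, b in zip(prev, cur)) / len(prev))
    return sum(diffs) / len(diffs)
```

A real implementation would decode camera frames (e.g., via a vision library) and likely filter out lighting changes, but the core motion signal is this per-frame difference.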
    AI/ML Overview

    The QbCheck device provides objective measurements of hyperactivity, impulsivity, and inattention to aid in the clinical assessment and treatment evaluation of ADHD.

    Here's a breakdown of the acceptance criteria and the study information provided:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document doesn't explicitly state numerical acceptance criteria in a dedicated table format with reported performance. However, it states that QbCheck is substantially equivalent to its predicate device, QbTest (K133382). This implies that its performance is considered comparable to the predicate for its intended use. The primary difference highlighted is the use of a webcam for motor activity tracking instead of an infrared camera and reflective marker, and the use of the spacebar instead of a responder button.

    The substantial equivalence determination is the "acceptance criterion" in this context, demonstrating that the device performs as intended and is as safe and effective as a legally marketed device.

    2. Sample Size Used for the Test Set and Data Provenance:

    The document does not explicitly state the sample size used for a specific "test set" solely to demonstrate performance against acceptance criteria for this K143468 submission. However, it references a comparison to the predicate device, QbTest (K133382). Clinical studies for the predicate would contain this information.

    Regarding data provenance for QbCheck, the document states: "For the activity tracking analysis QbCheck uses the built in camera in the user's laptop or a separate web camera on the user's desktop computer." This implies the data is generated from users performing the test. The document does not specify country of origin or whether studies were retrospective or prospective for this specific submission.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    The document does not detail the number or qualifications of experts used to establish ground truth for a test set for the QbCheck device itself. The interpretation of QbCheck results is stated to be "only by qualified health care professionals," which implies that clinical judgment from trained professionals would form the basis of actual ADHD diagnosis and evaluation.

    4. Adjudication Method for the Test Set:

    Given the information provided, there is no mention of an adjudication method (like 2+1, 3+1) for a test set directly comparing QbCheck's output against a "ground truth" established by experts. The focus of this 510(k) summary is on demonstrating substantial equivalence to the predicate device through technological comparison and intended use similarity.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:

    The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done. The submission focuses on substantial equivalence based on technical characteristics and intended use similar to the predicate QbTest.

    6. If a Standalone Study (Algorithm Only Without Human-in-the-Loop Performance) Was Done:

    The document describes QbCheck as a "computer-assisted attention and impulse control task and simultaneous recording of activity." Its results are intended to "aid in the clinical assessment," and "QbCheck results should be interpreted only by qualified health care professionals." This structure implies it is a tool to be used with human interpretation, not a standalone diagnostic that makes decisions without human-in-the-loop performance. Therefore, a standalone study without human interpretation of results would not be appropriate for its indicated use.

    7. The Type of Ground Truth Used:

    The document makes no explicit mention of the type of ground truth (e.g., expert consensus, pathology, outcomes data) used in a study to establish the diagnostic accuracy of QbCheck itself. The predicate device, QbTest, and by extension QbCheck, provide "objective measurements of hyperactivity, impulsivity, and inattention," which aid in clinical assessment of ADHD. The "ground truth" for ADHD diagnosis typically involves a comprehensive clinical evaluation by qualified healthcare professionals based on diagnostic criteria (e.g., DSM-5), which may include parental/teacher reports, clinical interviews, and behavioral observations, in addition to objective measures like those provided by QbCheck.

    8. The Sample Size for the Training Set:

    The document does not specify the sample size used for any training set for QbCheck.

    9. How the Ground Truth for the Training Set Was Established:

    The document does not specify how ground truth for any training set was established.


    K Number: K141865
    Device Name: DANA
    Manufacturer:
    Date Cleared: 2014-10-15 (97 days)
    Product Code: LQD
    Regulation Number: N/A
    Reference & Predicate Devices
    Intended Use

    DANA provides clinicians with objective measurements of reaction time (speed and accuracy) to aid in the assessment of an individual's medical or psychological state. Factors that may affect the measurement of reaction time include, but are not limited to, concussion, head injury, insomnia, post-traumatic stress disorder (PTSD), depression, attention deficit hyperactivity disorder (ADHD), memory impairment, delirium, prescription and non-prescription medication, some nutritional supplements, as well as a variety of psychological states (e.g., fatigue and stress).

    DANA also delivers and scores standardized psychological questionnaires. DANA results should be interpreted only by qualified professionals.

    Device Description

    DANA is a mobile application indicated to provide clinicians with objective measurements of reaction time (speed and accuracy) and standardized health assessments to aid in the assessment of an individual's medical or psychological state. DANA results should be interpreted only by qualified professionals.

    DANA was developed on a mobile platform to improve the access and availability of reaction time tests and standardized health assessments through (1) custom configuration of the system by clinicians based on their needs and discretion; and (2) allowing for objective health assessments in both in-clinic and out-of-clinic settings.
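A reaction-time task of this kind is typically summarized as the "speed and accuracy" pair the intended use statement describes: the fraction of correct responses and a central tendency of the correct-response latencies. The sketch below is a minimal, hypothetical illustration of that summary; the function name and data structure are assumptions, not taken from DANA's software.

```python
# Generic reaction-time summary: accuracy plus median correct RT.
# NOTE: illustrative assumption, not DANA's actual scoring code.
from statistics import median

def rt_summary(responses):
    """responses: list of (correct, rt_ms) tuples, one per trial.
    Returns (accuracy, median reaction time of correct responses)."""
    correct_rts = [rt for ok, rt in responses if ok]
    accuracy = len(correct_rts) / len(responses)
    return accuracy, median(correct_rts)  # median resists RT outliers
```

The median is used here because reaction-time distributions are right-skewed; a production system might also report variability or throughput, but those are design choices beyond what the summary text specifies.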

    AI/ML Overview

    The provided document is a 510(k) summary for the DANA device, an unclassified mobile-based task performance recorder. It compares DANA to a predicate device (QbTest) and mentions software testing, but it does not contain the detailed information necessary to fully address all parts of your request regarding acceptance criteria and the comprehensive study that proves the device meets those criteria.

    Specifically, the document does not include:

    • A table of acceptance criteria or reported device performance metrics (e.g., sensitivity, specificity, accuracy, precision, recall for diagnostic devices, or specific reaction time performance metrics like mean, standard deviation, error rate, etc.).
    • Details on sample size for a test set, data provenance, ground truth establishment for a test set, or adjudication methods.
    • Information on Multi-Reader Multi-Case (MRMC) studies or standalone algorithm performance studies.
    • Details on sample size for the training set or how ground truth for the training set was established.

    The document primarily focuses on demonstrating substantial equivalence to the predicate device based on intended use and technological characteristics, and mentions general software testing.

    Given the limitations of the provided text, I will answer the questions to the best of my ability, indicating where information is not present.


    Acceptance Criteria and Study Details for DANA Device

    1. A table of acceptance criteria and the reported device performance

    The provided document does not contain a table of acceptance criteria or specific reported device performance metrics for the DANA device (e.g., accuracy of reaction time measurement, consistency, precision, etc.). The 510(k) summary states that "Differences in the design and performance of DANA from the QbTest do not affect either the safety or effectiveness of DANA for its intended use," which is a high-level statement for substantial equivalence, but it does not quantify performance.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document does not specify a sample size for a "test set" or provide details on data provenance (country of origin, retrospective/prospective study design). The summary mentions "Software testing was conducted in accordance with FDA's May 2005 guidance document," which relates to software validation rather than clinical performance testing with a specific test set.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    The document does not mention the use of experts to establish ground truth for a test set or their qualifications. The DANA device is described as providing "objective measurements of reaction time (speed and accuracy)" and scoring "standardized psychological questionnaires." These types of measurements typically rely on predefined algorithms and the user's interaction directly with the device, rather than subjective expert interpretation for ground truth. The interpretation of DANA results is specified to be done by "qualified professionals," but this is about result usage, not ground truth establishment for performance validation of the device itself.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document does not describe any adjudication method as no specific test set requiring such expert adjudication is detailed.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    The document does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study. The DANA device provides objective measurements and scores questionnaires, it is not described as an AI-driven assistive tool for human readers in a diagnostic context that would typically warrant an MRMC study to measure improvement with AI assistance.

    6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance assessment was done

    The document describes DANA as a "mobile application indicated to provide clinicians with objective measurements of reaction time (speed and accuracy) and standardized health assessments." This implies a standalone (algorithm-only) performance in terms of generating these raw measurements and scores. The device takes inputs (user interactions) and produces outputs (reaction time data, questionnaire scores) without a specific "human-in-the-loop" component in the measurement generation process itself, though human interpretation of the results is required. However, no specific "standalone study" with detailed metrics is described.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The document does not explicitly state the type of ground truth used for validating the device's performance because the nature of the device (measuring reaction time and scoring questionnaires) implies that its "ground truth" relates to the accuracy and reliability of its internal clock, input detection, and scoring algorithms, rather than a diagnostic 'truth' like pathology. For reaction time, the ground truth would be the actual time elapsed between stimulus and response, measured by highly accurate timing mechanisms. For questionnaires, the ground truth would be the correct application of the scoring rules. No validation study details are provided to elaborate on how these internal "ground truths" were confirmed.
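To make that notion of timing ground truth concrete, here is a minimal sketch of a bench check of device-reported reaction times against a reference clock. This is illustrative only: the function name, data, and the 1 ms tolerance (borrowed from the T.O.V.A. microswitch claim, not from the DANA submission) are assumptions.

```python
# Sketch: verifying reaction-time accuracy against reference-clock measurements.
# Hypothetical example; names, data, and tolerance are illustrative.

def verify_rt_accuracy(device_rts_ms, reference_rts_ms, tolerance_ms=1.0):
    """Compare device-reported reaction times to reference measurements.

    Returns per-trial absolute errors and whether every trial falls
    within the stated timing tolerance.
    """
    if len(device_rts_ms) != len(reference_rts_ms):
        raise ValueError("trial counts must match")
    errors = [abs(d - r) for d, r in zip(device_rts_ms, reference_rts_ms)]
    return errors, all(e <= tolerance_ms for e in errors)

errors, ok = verify_rt_accuracy([251.2, 340.6, 298.9], [251.8, 340.1, 299.4])
```

A validation report would typically summarize the error distribution (max, mean) across many such trials rather than a single pass/fail flag.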

    8. The sample size for the training set

    The document does not mention a training set sample size. AI/ML software often uses training sets, but the description of DANA focuses on objective measurement and standardized questionnaire scoring, which may not heavily rely on complex supervised machine learning models requiring large labeled training sets in the same way an image recognition AI would. The software testing mentioned is more likely related to functional and performance testing against specifications rather than AI model training.

    9. How the ground truth for the training set was established

    Since no training set is mentioned (see point 8), the document does not provide information on how ground truth for a training set was established.


    K Number
    K133382
    Device Name
    QB TEST
    Manufacturer
    Date Cleared
    2014-03-24

    (140 days)

    Product Code
    Regulation Number
    N/A
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    LQD

    Intended Use

    QbTest provides clinicians with objective measurements of hyperactivity, impulsivity, and inattention to aid in the clinical assessment of ADHD (Attention Deficit Hyperactivity Disorder) and in the evaluation of treatment interventions in patients with ADHD. QbTest results should be interpreted only by qualified professionals.

    Device Description

    QbTest is a non-invasive test that has been developed to provide precise quantitative assessment of the capacity of an individual to pay attention to visual stimuli and inhibit impulses. There are three cardinal disturbances in Attention-Deficit Hyperactivity Disorder (ADHD): impaired attention, hyperactivity and impulsivity. QbTest provides an accurate and reproducible measure of an individual's capacity in each of these three domains by utilizing a consistent challenge paradigm coupled with detailed real-time measurements of behavior and performance. The fundamental core of QbTest is a computer-assisted attention and impulse control task and simultaneous recording of activity using an infrared camera for motion measurements.

    The system consists of the following components:

    • Client software
    • Responder button (also referred to as responder unit)
    • Infrared camera
    • Reflective motion marker
    • User manual
    • Technical manual
    • Stimulus card
    • Camera stand
    • Measuring tape
    • QbTest Behavior Rating Scale
    • In addition, the user must have access to a remote server that generates test reports
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study information for the QbTest device, based on the provided 510(k) summary:

    QbTest Acceptance Criteria and Performance Study Summary

    1. Table of Acceptance Criteria and Reported Device Performance

    The 510(k) summary does not explicitly state formal "acceptance criteria" in terms of predefined thresholds for performance metrics. Instead, it presents the results of clinical studies to demonstrate the device's ability to measure treatment effects in ADHD. The key performance indicators used are Effect Size (ES) and Agreement (NPA, PPA) with established clinical rating scales.

    Performance Metric 1: Effect Size (ES), Central Stimulants (CS) / Atomoxetine

    • Acceptance Criteria (Implicit):
      • Single-dose/short-term studies: demonstrate statistically significant effects and generally "large" effect sizes (Cohen's d > 0.8, partial eta-square > 0.14) for the key ADHD domains (hyperactivity, inattention, impulsivity) as an objective measure of treatment responsiveness.
      • Longer-term studies: demonstrate statistically significant effects and moderate to large effect sizes (partial eta-square 0.06 to 0.14) for the key ADHD domains.
      • Overall treatment response (QbTest Total score vs. RS Total score): demonstrate statistically significant effect sizes comparable to clinically validated rating scales (RS) for assessing treatment interventions. (Implicitly, the device should detect treatment effects as well as or better than existing methods.)
    • Reported Device Performance:
      • Placebo-controlled study (atomoxetine): Hyperactivity (Time Active, Distance, Area, Microevents) ES (Cohen's d) 0.85–1.49 (large); Inattention (Reaction Time Variation, Omission Errors) ES 1.24 and 0.8 (large); Impulsivity (Commission Errors) ES 0.82 (large).
      • Placebo-controlled study (methylphenidate, dexamphetamine): overall treatment effect ES 0.62 (partial eta-square).
      • Long-term study (methylphenidate in adults): Hyperactivity ES (partial eta-square) 0.43–0.51 (moderate to large); Inattention ES 0.51 and 0.46 (moderate to large); Impulsivity ES 0.28 (small to moderate).
      • Registry study (QbTest Total vs. RS Total): QbTest Total score ES (Cohen's d) 1.06 (large); RS Total score ES (Cohen's d) 0.98 (large).

    Performance Metric 2: Agreement with Rating Scales (RS) for Treatment Response

    • Acceptance Criteria (Implicit): demonstrated agreement with clinically based thresholds for "meaningful response to treatment," using a cut-off of −0.5 Q-scores for QbTest and a −30% change in ADHD RS Total score. No specific minimum acceptable percentages are given; the study aims to show the correlation and rates of agreement.
    • Reported Device Performance (Pooled Cohort): NPA 40% (CI: 28% to 53%); PPA 75% (CI: 63% to 85%).

    Important Note: The document explicitly states: "Although the ES in the above studies all show large treatment effects they must be interpreted with caution since two of the studies did not include a concurrent control arm." It also mentions, "The registry study showed statistically significant but low correlations between QbTest and the clinically validated rating scales (RS)." This suggests that while effects are detectable, agreement is not perfect, necessitating clinical evaluation alongside QbTest results.
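The two summary statistics used in these comparisons, Cohen's d and positive/negative percent agreement (PPA/NPA), can be computed as in the following minimal sketch. The data are invented; the studies' actual analysis pipelines are not described in the document.

```python
# Sketch: Cohen's d effect size (pooled SD) and percent agreement between
# a test method and a reference method. Illustrative data only.
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

def percent_agreement(test_positive, reference_positive):
    """PPA/NPA of a test against a reference method (not a ground truth)."""
    pairs = list(zip(test_positive, reference_positive))
    tp = sum(1 for t, r in pairs if t and r)
    fn = sum(1 for t, r in pairs if not t and r)
    tn = sum(1 for t, r in pairs if not t and not r)
    fp = sum(1 for t, r in pairs if t and not r)
    return tp / (tp + fn), tn / (tn + fp)
```

In the registry study's terms, `test_positive` would flag patients whose QbTest Q-score dropped by at least 0.5, and `reference_positive` those whose RS Total score fell by at least 30%; since the RS is itself an imperfect reference, agreement (PPA/NPA) is reported rather than sensitivity/specificity.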

    2. Sample Sizes Used for the Test Set and Data Provenance

    The "test set" in this context refers to the data used for the clinical performance evaluation for the expanded intended use (evaluation of treatment interventions).

    • Registry Study (comparing QbTest with Rating Scales):

      • Pooled Cohort: 115 patients (42 children/adolescents + 73 adults)
      • Child Cohort: 42 children/adolescents (mean age 11.5, 35 male, 7 female)
      • Adult Cohort: 73 adults (mean age 35, 41 male, 32 female)
      • Data Provenance: Retrospective, from clinical centers. The child cohort consisted of Swedish children, and the adult cohort consisted of Dutch adults.
    • Other Published Clinical Studies (evaluating responsiveness/effect size):

      • Study 5 (Atomoxetine vs. Placebo): 128 children with ADHD (mean age 9.0)
      • Study 6 (Methylphenidate, Dexamphetamine vs. Placebo): 36 medication-naïve children (aged 9-14 years)
      • Study 7 (Methylphenidate in Adults): 23 adults

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • The ground truth for the Effect Size (ES) was based on the change from baseline in QbTest metrics itself or in the ADHD Rating Scales (RS). The clinical diagnosis of ADHD and assessment of treatment response using rating scales would have been performed by "qualified professionals" as stated in the intended use.
    • For the Agreement with Rating Scales, the "ground truth" for the rating scales (RS) was implicitly established by the application of those scales by clinicians.
    • The document does not specify the number or specific qualifications (e.g., number of years of experience) of the experts/clinicians who established the diagnoses or administered the rating scales in these studies. It only refers to "qualified professionals."

    4. Adjudication Method for the Test Set

    • The document does not describe an explicit adjudication method for establishing ground truth from multiple experts for the clinical studies mentioned.
    • For the comparison between QbTest and Rating Scales, the methods are compared directly, meaning the "ground truth" for each is independently derived (QbTest metrics vs. RS scores). The agreement (NPA/PPA) is essentially a comparison rather than an adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    • No, an MRMC comparative effectiveness study was not described. The QbTest is described as a diagnostic aid that provides "objective measurements" to supplement clinical assessment, not as an AI system assisting human readers/clinicians in interpreting other data. The studies focused on the device's standalone ability to detect treatment effects and its correlation with traditional rating scales.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Yes, a standalone performance assessment was done in the sense that QbTest's measurements and the derived "Q-scores" are direct outputs of the device's algorithms based on patient behavior and performance during the task. The effect sizes reported for QbTest are based solely on these objective measurements, demonstrating its standalone capacity to detect changes due to treatment.
    • The comparison with Rating Scales in the registry study also assesses the standalone performance of QbTest against another established standalone method (RS).

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    • The primary "ground truth" in these studies consists of clinical diagnoses of ADHD (DSM-IV criteria mentioned) and changes in symptoms and behavior due to treatment, as measured by:
      • QbTest's objective metrics (Time active, Distance, Area, Microevents, Reaction Time Variation, Omission Errors, Commission Errors).
      • Clinically validated Rating Scales (RS) for ADHD, with a specific, clinically-derived threshold for "meaningful response" (-30% change).
      • In placebo-controlled studies, the "ground truth" for treatment effect is the difference in outcomes between the active treatment group and the placebo group.

    8. The Sample Size for the Training Set

    • The document does not explicitly describe a "training set" for the QbTest device's core algorithms. The QbTest is presented as a system that provides quantitative measurements based on standardized tasks and infrared motion tracking.
    • The studies mentioned are for the validation of the device's ability to measure treatment effects, not for training a machine learning model. The algorithms for calculating QbTest metrics and Q-scores are expected to be established during the device's development, prior to these clinical validation studies.

    9. How the Ground Truth for the Training Set Was Established

    • As no explicit training set for a machine learning model is mentioned, there is no information provided on how ground truth for a training set was established.

    K Number
    K122149
    Device Name
    QBTEST
    Manufacturer
    Date Cleared
    2012-10-17

    (90 days)

    Product Code
    Regulation Number
    N/A
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    LQD

    Intended Use

    QbTest is indicated to be used to aid in the clinical assessment of ADHD. QbTest results should be interpreted by qualified health care professionals only.

    QbTest provides clinicians with objective measurements of hyperactivity, impulsivity, and inattention to aid in the clinical assessment of ADHD. QbTest results should be interpreted only by qualified professionals.

    Device Description

    QbTest is a non-invasive test that has been developed to provide precise quantitative assessment of the capacity for an individual to pay attention to visual stimuli and inhibit impulses. There are three cardinal disturbances in Attention-Deficit Hyperactivity Disorder (ADHD); impaired attention, hyperactivity and impulsivity. QbTest provides an accurate and reproducible measure of an individual's capacity in each of these three domains by utilizing a consistent challenge paradigm coupled with detailed real-time measurements of behavior and performance. The fundamental core of QbTest is a computer-assisted attention and impulse control task and simultaneous recording of activity using an infrared camera for motion measurements.

    The system consists of the following components:

    • Client software
    • Responder button (also referred to as responder unit)
    • Infrared camera
    • Reflective motion marker
    • User manual
    • Technical manual
    • Stimulus card
    • Camera stand
    • Measuring tape
    • QbTest Behaviour Rating Scale
    • In addition, the user must have access to a remote server that generates test reports
    AI/ML Overview

    The provided FDA 510(k) summary for QbTest v3.5 is a predicate device comparison, rather than a typical AI/ML medical device submission with specific performance acceptance criteria for an algorithm. Therefore, the information typically found for AI/ML device validation studies (like sensitivity/specificity, ROC curves, MRMC studies, precise ground truth establishment for a test set, etc.) is largely absent.

    The submission focuses on demonstrating substantial equivalence to a previously cleared device (QbTest K040894) and the Gordon Diagnostic System (K854903) by showing similar intended use, technological characteristics, and safety/performance based on normative data collection and prior published clinical studies of the device and its predecessor.

    Here's an attempt to answer your questions based on the provided text, highlighting what is present and what is not:


    1. Table of Acceptance Criteria and Reported Device Performance

    Strict "acceptance criteria" as you'd find in an AI/ML device validation (e.g., minimum sensitivity or specificity) are not stated in this 510(k) summary. The performance is demonstrated through:

    Acceptance Criteria (Implied) and Reported Device Performance:

    • Demonstrated safety and effectiveness (as per predicate): system tested to EN 60601-1 and EN 60601-1-2 standards.
    • Provision of objective measurements of hyperactivity, impulsivity, and inattention: QbTest provides measurements in these domains.
    • Aid in clinical assessment of ADHD: four published studies evaluated clinical validity.
    • Reliability (test-retest consistency): two test-retest studies completed.
    • Normative data for interpretation (age/gender-specific): normative database of 1307 individuals (6–60 years).
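Test-retest consistency of this kind is commonly summarized as a correlation between two administrations of the same measure. The 510(k) summary does not say which coefficient the two test-retest studies used, so the Pearson correlation below is an illustrative stand-in.

```python
# Sketch: test-retest reliability as a Pearson correlation between two
# sessions of the same measure. Coefficient choice is an assumption.
import statistics

def pearson_r(session1, session2):
    """Pearson correlation between paired scores from two test sessions."""
    mx, my = statistics.mean(session1), statistics.mean(session2)
    cov = sum((x - mx) * (y - my) for x, y in zip(session1, session2))
    sx = sum((x - mx) ** 2 for x in session1) ** 0.5
    sy = sum((y - my) ** 2 for y in session2) ** 0.5
    return cov / (sx * sy)
```

A value near 1.0 indicates that individuals keep their relative standing across sessions; reliability studies usually also check that mean scores do not drift between administrations.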

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Clinical Validation/Normative Data:
      • "Normative tests have been gathered from several different cohorts resulting in a normative database of 1307 individuals between 6 and 60 years with an even age and gender distribution." This 1307-individual dataset serves as the primary "reference" or "test set" against which individual patient performance is compared. It's not a "test set" in the sense of an independent validation set for algorithm performance, but rather a normative reference.
      • The submission also mentions "four published studies which have evaluated the clinical validity of the QbTest for its intended use population" and "two test-retest studies." The individual sample sizes for these specific studies are not provided in this summary.
    • Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective). However, the general nature of normative data collection often implies a prospective or at least a systematic retrospective collection from a defined population. The submitter is Swedish (Qbtech AB, Stockholm, Sweden), which might suggest some data from that region, but this is speculative.

    3. Number of Experts Used to Establish Ground Truth and Their Qualifications

    • Ground Truth for QbTest: The QbTest itself generates the objective measurements of hyperactivity, impulsivity, and inattention. The "ground truth" for ADHD diagnosis isn't established by individual experts reviewing test data; rather, the test aids qualified healthcare professionals in making the diagnosis.
    • The "normative database" would have been collected from individuals (both with and without diagnosed ADHD, presumably) where their diagnostic status would have been established by qualified clinicians, but the number and qualifications of these clinicians are not specified.
    • The "four published studies" and "two test-retest studies" would have involved clinical professionals to manage and interpret the data, but no specific count or qualifications are provided in this summary.

    4. Adjudication Method for the Test Set

    • Not applicable in the typical sense of expert review for an algorithm's output. The QbTest itself produces quantitative output. Any diagnostic "ground truth" used in the underlying clinical studies (if those studies involve comparing QbTest outputs to clinical diagnoses) would likely follow standard clinical diagnostic procedures, which may involve adjudication, but this is not described in the 510(k) summary.

    5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study

    • No, an MRMC study is not mentioned or described. This type of study (human readers with and without AI assistance) is typically performed for imaging or diagnostic algorithms that directly influence a human reader's interpretation. QbTest provides quantitative data that aids a clinician, rather than directly modifying their interpretation of, for example, an anatomical image.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    • Yes, in essence. The QbTest itself is a standalone device that measures parameters (hyperactivity, impulsivity, inattention). Its performance is assessed by how well these measurements are collected and how consistently they reflect a person's behavior/performance. The "clinical validity" studies assess the utility of these measurements in aiding ADHD assessment, which is analogous to a standalone performance evaluation of the device's output. The summary refers to "four published studies which have evaluated the clinical validity of the QbTest for its intended use population."

    7. Type of Ground Truth Used

    • Clinical Diagnosis/Phenotype (implicit): For the "clinical validity" studies, the "ground truth" would likely be a clinical diagnosis of ADHD (or lack thereof) made by qualified professionals, following established diagnostic criteria (e.g., DSM criteria). The QbTest's output is then correlated with or assessed for its ability to discriminate based on this clinical "ground truth."
    • Observed Behavior/Performance (inherent): For the test-retest reliability studies, the ground truth is the inherent stability of an individual's performance and behavior on the test over time.
    • Normative Data: The "normative database" itself serves as a "ground truth" for what is considered typical performance for a given age and gender.
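The normative-database comparison amounts to expressing an individual's raw score relative to the matched cohort, for example as a standard (z) score. This sketch is a simplification: QbTest's actual normative scaling ("Q-scores") is not specified in the document.

```python
# Sketch: standard score of a raw result against an age- and gender-matched
# normative sample. Hypothetical simplification of normative scaling.
import statistics

def normative_z(raw_score, norm_sample):
    """How many SDs the raw score sits above the normative cohort mean."""
    mu = statistics.mean(norm_sample)
    sigma = statistics.stdev(norm_sample)
    return (raw_score - mu) / sigma
```

In use, `norm_sample` would be the scores of the 1307-person normative database restricted to the patient's age and gender band, and elevated z-scores on activity or error measures would flag atypical performance for clinical review.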

    8. Sample Size for the Training Set

    • Not Applicable / Not Explicitly Stated. The QbTest described is a direct measurement system, not a machine learning algorithm that is "trained" on a dataset in the conventional sense. The "normative database" of 1307 individuals functions more like a reference set rather than a "training set" for an AI model.

    9. How the Ground Truth for the Training Set Was Established

    • Not Applicable / Not Explicitly Stated. As it's not an ML training set, the concept of establishing ground truth for training doesn't apply directly. The "normative database" was established by collecting data from "several different cohorts" of individuals between 6 and 60 years old. The methods for this collection are described as being in the "technical manual," but not detailed here. Presumably, these were "healthy" or "typically developing" individuals to establish the "norm."

    Summary of Limitations based on the provided text:

    This 510(k) summary is typical for a non-AI/ML device that is seeking clearance based on substantial equivalence to an existing predicate. It heavily relies on prior clearance and existing clinical literature demonstrating the utility of the type of device. It does not provide the detailed information about AI/ML validation studies that are now common in submissions for software as a medical device (SaMD) utilizing AI/ML algorithms.


    K Number
    K040894
    Device Name
    QBTEST
    Manufacturer
    Date Cleared
    2004-06-22

    (77 days)

    Product Code
    Regulation Number
    N/A
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    LQD

    Intended Use

    QbTest provides clinicians with objective measurements of hyperactivity, impulsivity, and inattention to aid in the clinical assessment of ADHD. QbTest results should be interpreted only by qualified professionals.

    Device Description

    The QbTest is a 15-minute, non-invasive test that has been developed to provide precise quantitative assessment of the capacity of children to pay attention to visual stimuli while inhibiting their locomotor activity and controlling their urge to respond impulsively. There are three cardinal disturbances in Attention-Deficit Hyperactivity Disorder (ADHD): impaired attention, hyperactivity and impulsivity. QbTest provides an accurate and reproducible measure of a child's capacity in each of these three domains by utilizing a consistent challenge paradigm coupled with detailed real-time measurements of behavior and performance. The fundamental core of QbTest is a computer-administered go/not-go vigilance response task combined with motion capture.
    The system consists of the following components:
    • Client PC software
    • Connection box
    • Responder button
    • Camera for motion measurement
    • Reflective marker
    • USB and serial cable
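The go/not-go scoring underlying such a task can be sketched as follows. The trial format and field names are a hypothetical simplification, not from the submission; the device additionally records motion-capture activity, which is omitted here.

```python
# Sketch: scoring a go/no-go continuous performance task into the error
# categories these devices report: omission errors (missed "go" targets)
# and commission errors (responses to "no-go" stimuli).

def score_cpt(trials):
    """trials: list of (is_target, responded, rt_ms or None) tuples."""
    omissions = sum(1 for tgt, resp, _ in trials if tgt and not resp)
    commissions = sum(1 for tgt, resp, _ in trials if not tgt and resp)
    hit_rts = [rt for tgt, resp, rt in trials if tgt and resp and rt is not None]
    mean_rt = sum(hit_rts) / len(hit_rts) if hit_rts else None
    return {"omissions": omissions, "commissions": commissions,
            "mean_rt_ms": mean_rt}
```

Response-time variability (the consistency measure these tests emphasize) would be derived from the same `hit_rts` list, e.g. as its standard deviation.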

    AI/ML Overview

    The provided 510(k) summary for the QbTest device does not contain a detailed study demonstrating acceptance criterion, or tables of acceptance criteria and reported device performance. It focuses on establishing substantial equivalence to a predicate device (OPTAx System K020800) and outlines the device's intended use and technical characteristics.

    However, based on the information provided, we can infer some details and highlight the missing information:

    1. Table of Acceptance Criteria and Reported Device Performance

    This information is not provided in the given 510(k) summary. The document focuses on demonstrating substantial equivalence to a predicate device by stating that the QbTest "provides the same or similar functions and has a similar design" and that "The new characteristics do not affect safety or effectiveness." It also mentions "Performance Testing" for the camera (EN60825-1:1994) and the system (EN 60601-1 and EN 60601-1-2), which are safety and electrical compatibility standards, not clinical performance acceptance criteria for ADHD assessment.

    To completely answer this question, a clinical study summary with specific performance metrics (e.g., sensitivity, specificity, accuracy, precision for hyperactivity, impulsivity, and inattention measurements) and their corresponding acceptance thresholds would be required.

    2. Sample Size Used for the Test Set and Data Provenance

    This information is not explicitly provided in the given 510(k) summary. The document does not describe a specific clinical performance test set, its sample size, or the provenance (country of origin, retrospective/prospective) of any clinical data used to support the device's effectiveness.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    This information is not provided in the given 510(k) summary. Since no specific clinical performance test set or ground truth establishment method is described, details about experts or their qualifications are absent.

    4. Adjudication Method for the Test Set

    This information is not provided in the given 510(k) summary. Given the absence of a described clinical test set, an adjudication method would not be detailed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Its Effect Size

    This information is not provided in the given 510(k) summary. The document does not mention any MRMC studies or a comparison of human reader performance with and without AI assistance. The QbTest described is an objective measurement tool for ADHD, not an AI-assisted diagnostic aid for human readers in the context of image interpretation, which is typical for MRMC studies.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    The QbTest, as described, is a standalone algorithm/device for objective measurement. The "Intended use" states, "QbTest provides clinicians with objective measurements...to aid in the clinical assessment of ADHD." It's an independent diagnostic aid, not a component meant to be integrated into a human's judgment process during real-time interpretation. Therefore, its performance would inherently be "standalone." However, a specific study detailing its standalone performance independent of human interpretation for making a diagnosis is not described here. The provided text refers to the device itself as providing the objective measurements.

    7. The Type of Ground Truth Used

    This information is not explicitly provided in the given 510(k) summary. For a device like QbTest, ground truth for "hyperactivity, impulsivity, and inattention" would typically involve:

    • Clinical diagnosis of ADHD (or absence of ADHD) made by qualified clinicians using established diagnostic criteria (e.g., DSM-IV, DSM-5).
    • Behavioral observations or other psychometric assessments.

    The document implicitly suggests that the QbTest measurements aid in this clinical assessment, implying that the clinical assessment itself (likely expert consensus based on established criteria) serves as the ground truth against which the QbTest's utility is evaluated, but no study details are given.

    8. The Sample Size for the Training Set

    This information is not provided in the given 510(k) summary. There is no mention of a "training set" or a description of machine learning model development, which would typically involve such a set. The QbTest is described as utilizing a "consistent challenge paradigm coupled with detailed real-time measurements of behavior and performance," suggesting its output is based on pre-defined algorithms rather than a trained machine learning model in the modern sense.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned (see point 8), the method for establishing its ground truth is also not provided.


    K Number
    K020800
    Device Name
    OPTAX SYSTEM
    Date Cleared
    2002-06-10

    (90 days)

    Product Code
    Regulation Number
    N/A
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    LQD

    Intended Use

    The OPTAx System provides clinicians with objective measurements of hyperactivity, impulsivity and inattention to aid in the clinical assessment of ADHD. OPTAx results should be interpreted only by qualified professionals.

    Device Description

    OPTAx (Optical Tracking and Attention test) is a 15-minute, non-invasive, office-based test that was developed to provide precise quantitative assessment of the capacity of children to pay attention to visual stimuli, while inhibiting their locomotor activity and controlling their impulsive responses. The three core symptoms of Attention-Deficit Hyperactivity Disorder (ADHD) are: impaired attention, hyperactivity, and impulsivity. OPTAx provides an accurate and reproducible measure of a child's capacity in each of these three domains by utilizing a consistent challenge paradigm, coupled with detailed real-time measurements of behavior and performance. The fundamental core of OPTAx is a computer-administered GO/NO-GO vigilance response task.

    The OPTAx System consists of the following components:

    • iMac computer and peripherals (printer and adapters)
    • Pre-installed OPTAx software program
    • Motion analysis camera and peripherals
    • Data analysis software on the OPTAx secure server
    AI/ML Overview

    Unfortunately, the provided text does not contain detailed information about the acceptance criteria or the specific clinical study that proves the OPTAx System meets such criteria.

    The document is a 510(k) summary for the OPTAx System, primarily focusing on its substantial equivalence to a predicate device (Gordon Diagnostic System, K854903) for the purpose of aiding in the clinical assessment of ADHD.

    Here's a breakdown of what is available and what is missing based on your request:

    What is Available:

    • Device Description: OPTAx is a 15-minute non-invasive, office-based test for quantitative assessment of attention, inhibition of locomotor activity, and control of impulsive responses in children. It uses a computer-administered GO/NO-GO vigilance response task.
    • Intended Use: To provide clinicians with objective measurements of hyperactivity, impulsivity, and inattention to aid in the clinical assessment of ADHD.
    • Comparison to Predicate: Both OPTAx and the Gordon System aid in ADHD assessment by providing objective measurements of impulsivity and inattention; OPTAx additionally provides objective measurements of hyperactivity. Both are microprocessor-based vigilance task recorders.
    • Performance Testing Mention: "Bench testing of the camera to EN 60825-1-1994, safety standards has been performed. Clinical testing has also been performed on the OPTAx System." This statement acknowledges clinical testing but provides no details about it.

    What is Missing (and thus cannot be provided in the requested format):

    1. Table of Acceptance Criteria and Reported Device Performance: This information is not present. The document does not define specific performance metrics (e.g., sensitivity, specificity, accuracy, precision for different symptoms) or the targets for these metrics that would constitute "acceptance criteria." Consequently, no "reported device performance" against such criteria is listed.
    2. Sample Size used for the test set and data provenance: The document mentions "Clinical testing has also been performed," but does not specify the sample size used, the characteristics of the test set, or its provenance (e.g., country of origin, retrospective/prospective).
    3. Number of experts used to establish the ground truth for the test set and qualifications: This information is not provided.
    4. Adjudication method for the test set: Not mentioned.
    5. Multi-Reader Multi-Case (MRMC) comparative effectiveness study: The document does not describe such a study. Since the device is a diagnostic aid providing objective measurements rather than an AI-assisted reader, the concept of "human readers improve with AI vs without AI assistance" does not directly apply in the way it would for image interpretation AI.
    6. Standalone performance (algorithm only without human-in-the-loop performance): While the OPTAx system provides "objective measurements," the document implies these are intended to aid clinical assessment by qualified professionals, suggesting a human-in-the-loop scenario for interpretation. However, specific standalone performance metrics for the algorithm itself are not detailed.
    7. Type of ground truth used: Not specified. Given the intended use to "aid in the clinical assessment of ADHD," the ground truth would likely involve clinical diagnosis of ADHD, but the method for establishing this (e.g., expert clinical diagnosis, diagnostic interviews, outcome data over time) is not discussed.
    8. Sample size for the training set: Not mentioned. The document primarily discusses the device and its intended use, not the development or training of its underlying algorithms.
    9. How the ground truth for the training set was established: Not mentioned.
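    For context on point 1: the kinds of performance metrics that acceptance criteria for a diagnostic aid would typically specify (none of which appear in this 510(k)) are computed from a confusion matrix against a clinical reference diagnosis. The counts in the sketch below are hypothetical, purely for illustration.

    ```python
    def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
        """Standard accuracy metrics against a clinical reference diagnosis."""
        return {
            "sensitivity": tp / (tp + fn),            # fraction of true ADHD cases flagged
            "specificity": tn / (tn + fp),            # fraction of non-ADHD cases cleared
            "accuracy": (tp + tn) / (tp + fp + fn + tn),
        }

    # Hypothetical confusion counts, for illustration only:
    diagnostic_metrics(tp=80, fp=10, fn=20, tn=90)
    # -> sensitivity 0.8, specificity 0.9, accuracy 0.85
    ```

    Acceptance criteria would state target values for metrics like these (for example, a minimum sensitivity and specificity), which the clinical study would then have to meet; no such targets or results are reported in this summary.
    
    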

    Conclusion:

    The provided 510(k) summary focuses on demonstrating substantial equivalence to a predicate device based on intended use and technological characteristics. It lacks the detailed information regarding specific performance acceptance criteria and the results of clinical studies against those criteria that you are requesting. The statement "Clinical testing has also been performed" is a high-level acknowledgment without further substantiation in this particular document.
