510(k) Data Aggregation
(259 days)
QEA
The EyeBOX is intended to measure and analyze eye movements as an aid in the diagnosis of concussion, also known as mild traumatic brain injury (mTBI), in patients 5 through 67 years of age in conjunction with a standard neurological assessment of concussion.
A negative BOX Score classification may correspond to eye movement that is consistent with a lack of concussion within one week of injury.
A positive BOX Score classification corresponds to eye movement that may be present in both patients with or without concussion within one week of injury.
The EyeBOX provides a set of eye-tracking metrics in relation to a reference database of uninjured individuals.
Oculogica's EyeBOX EBX-4.1 is an eye-tracking device with custom software. The device is comprised of a host PC with integrated touchscreen interface for the operator, eye tracking camera, LCD stimulus screen and head-stabilizing rest (chin rest and forehead rest) for the patient, and data processing algorithm. The data processing algorithm detects subtle changes in eye movements resulting from concussion. The eye tracking task takes about 4 minutes to complete and involves watching a video move around the perimeter of a screen positioned in front of the patient while a high speed near-infrared (IR) camera records gaze positions 500 times per second. The post-processed data are analyzed automatically to produce one or more outcome measures. The device contains a rechargeable battery, which makes it possible to use without a direct connection to an active power source. The device has Wi-Fi and Ethernet capabilities, which optionally can provide the user with the ability to upload scans to a remote server, provide over-the-air software updates, and assist with customer support.
The provided FDA 510(k) Clearance Letter for EyeBOX EBX-4.1 (K242116) primarily focuses on demonstrating substantial equivalence to a predicate device (EyeBOX EBX-4, K212310) rather than presenting a detailed clinical study for novel acceptance criteria. The document states that the EyeBOX EBX-4.1's core diagnostic algorithm (BOX score) and hardware are "exactly the same" as its primary predicate. Therefore, the "acceptance criteria" and "device performance" in relation to a new clinical study are not explicitly defined or rigorously proven as they would be for a de novo device or a device with significant changes affecting its diagnostic performance.
However, the document does describe some performance testing related to additional eye-tracking metrics and software updates. It does NOT include a Multi-Reader Multi-Case (MRMC) comparative effectiveness study or a standalone algorithm performance study for the primary diagnostic function (BOX score).
Based on the provided text, here's an attempt to outline the requested information, acknowledging the limitations inherent in a 510(k) submission focused on substantial equivalence:
Acceptance Criteria and Device Performance (Based on "Additional Eye-tracking Metrics Performance Testing")
The document describes testing for newly included eye-tracking metrics. While not explicit "acceptance criteria" in the strict sense for diagnostic performance, the following table summarizes the reported performance for these new metrics, which are stated to be for "contextual clinical information" and "not intended to aid in diagnosis." The key acceptance criterion for these metrics appears to be their reliability as demonstrated by test-retest performance.
1. Table of Acceptance Criteria and Reported Device Performance
Feature/Metric | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
Accuracy & Precision (General eye-tracking measurements) | Functioned as intended (qualitative statement) | Accuracy and precision testing completed for timestamp, gaze position, pupil size (diameter), and blink duration. Accuracy testing also conducted for blink count and saccade detection. (No quantitative results provided) |
Reliability (Test-Retest for Eye-tracking Metrics) | Expected variation between repeated tests represented by Bland-Altman 95% limits of agreement (LoA); "functioned as intended." | Bland-Altman 95% LoA values were determined for each metric. These values are incorporated into the device output. "In all instances, the EyeBOX Model EBX-4.1 functioned as intended." (No specific LoA values provided) |
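The Bland-Altman limits of agreement referenced above have a standard construction: compute each subject's difference between the two test sessions, then report the mean difference (bias) plus or minus 1.96 standard deviations of the differences. A minimal sketch of that computation (the function name and sample data are illustrative, not Oculogica's implementation):

```python
import statistics

def bland_altman_loa(session1, session2):
    """Bland-Altman bias and 95% limits of agreement for paired
    test-retest measurements of a single eye-tracking metric."""
    diffs = [b - a for a, b in zip(session1, session2)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical repeated measurements of one metric in 4 subjects:
bias, (lower, upper) = bland_altman_loa([1.0, 2.0, 3.0, 4.0],
                                        [1.1, 2.0, 3.2, 3.9])
```

The (lower, upper) pair is the kind of LoA interval the submission says is "incorporated into the device output" as the expected test-retest variation.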
2. Sample Size and Data Provenance
- Test Set Sample Size: 30 healthy individuals.
- Data Provenance: Retrospective, drawn "from the EyeBOX normative database." (No specific country of origin mentioned, but implies data collected during previous EyeBOX studies for normative data). The study itself was prospective in design for the "test-retest" aspect (i.e., participants underwent two tests for this specific study).
3. Number of Experts and Qualifications for Ground Truth
- Not Applicable (N/A) for this specific testing. The "Additional Eye-tracking Metrics Performance Testing" described is primarily a technical validation (accuracy, precision, test-retest reliability) of the eye-tracking measurements themselves, not a diagnostic performance study requiring expert ground truth on concussion. The device's diagnostic capability (BOX score) is deemed substantially equivalent to the predicate, K212310, and thus, its ground truth establishment would refer to the studies for that earlier clearance.
4. Adjudication Method for the Test Set
- None. As this was a technical validation of eye-tracking metrics and not a diagnostic accuracy study, expert adjudication was not relevant.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No. An MRMC study was not described in this 510(k) submission. The document focuses on demonstrating that changes (software updates, additional output metrics) don't adversely impact the performance established by the predicate device. The primary diagnostic algorithm (BOX score) is stated to be identical to the predicate.
6. Standalone (Algorithm Only) Performance
- Not explicitly highlighted as a separate study in this document. The BOX score algorithm's performance as a standalone diagnostic aid would have been established during the clearance of the predicate device (K212310). This 510(k) asserts that the BOX score algorithm itself is "exactly the same" and its "principles of operation have not changed." The "Output Report Metric Selection for Concussion" is mentioned as a performed activity, but details of a standalone study for the EBX-4.1's diagnostic performance are not provided.
7. Type of Ground Truth Used
- Not explicitly detailed for the EBX-4.1 in this document. For the diagnostic performance (BOX score), the ground truth for the predicate device's studies (K212310) would have been crucial for establishing its effectiveness as an "aid in the diagnosis of concussion." The current submission emphasizes the stability of the BOX score algorithm and its output compared to the predicate.
- For the additional eye-tracking metrics (Pupil Size, Blinks, Disconjugacy, etc.) that are for "contextual clinical information" and "not intended to aid in diagnosis," the "ground truth" for the performance testing cited would be the true physical eye movements/features being measured (e.g., actual blink duration, actual pupil diameter for accuracy). This often involves calibrated phantoms or controlled experimental setups to assess measurement correctness. The "test-retest" reliability used the same healthy individuals observed twice.
8. Sample Size for the Training Set
- Not provided in this document. The document states that the EyeBOX EBX-4.1's diagnostic algorithm (BOX score) and hardware are "exactly the same" as the predicate (EyeBOX EBX-4). Therefore, the training data for the BOX score algorithm would be associated with the development and validation of the K212310 device, not explicitly detailed here.
- The 30 healthy individuals used for the test-retest study constitute a reliability test set, not a training set. The actual size of the "reference database of uninjured individuals" used for contextual comparison is not specified; it serves as a general reference for physicians rather than as a training set for an algorithm.
9. How the Ground Truth for the Training Set Was Established
- Not provided in this document for the reasons stated in point 8. This information would be critical for the original K212310 submission for the EyeBOX EBX-4.
- For the newly added "reference database of uninjured individuals," the "ground truth" is simply that these individuals were uninjured. This likely involved a screening process to confirm their healthy status, but no details are given.
(152 days)
QEA
The EyeBOX is intended to measure and analyze eye movements as an aid in the diagnosis of concussion, also known as mild traumatic brain injury (mTBI), within one week of head injury in patients 5 through 67 years of age in conjunction with a standard neurological assessment of concussion.
A negative EyeBOX classification may correspond to eye movement that is consistent with a lack of concussion.
A positive EyeBOX classification corresponds to eye movement that may be present in both patients with or without concussion.
Oculogica's EyeBOX model EBX-4 uses eye-tracking technology and a data processing algorithm to detect subtle changes in eye movements resulting from concussion. The eye tracking task takes about 4 minutes to complete and involves watching a video move around the perimeter of an LCD monitor positioned in front of the patient while a high speed near-infrared (IR) camera, positioned just below the screen, records eye movements. The data are then analyzed to produce one or more outcome measures.
The patient sits in front of the device with their head secured in a chinrest about 40 cm from the middle of the display. A face tracking camera, positioned above the screen, assists the patient and operator with proper placement of the patient by providing a second set of eye coordinates in three-dimensional space to calculate the patient's distance from the screen and the patient face's position relative to the display.
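Once the face-tracking camera returns eye coordinates in three-dimensional space, the distance calculation described above reduces to point-to-plane geometry. A sketch under assumed conventions (the coordinate frame, function name, and parameters are illustrative, not taken from the submission):

```python
import math

def distance_to_screen(eye_xyz, screen_point, screen_normal):
    """Perpendicular distance (cm) from a tracked 3D eye position to
    the display plane, given any point on the plane and its normal."""
    v = [e - p for e, p in zip(eye_xyz, screen_point)]
    n_len = math.sqrt(sum(c * c for c in screen_normal))
    return abs(sum(vc * nc for vc, nc in zip(v, screen_normal))) / n_len

# An eye 40 cm in front of the screen plane (the nominal seating distance):
d = distance_to_screen((0.0, 5.0, 40.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
# d == 40.0
```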
The device contains a rechargeable battery, which makes it possible to use without a direct connection to an active power source.
The device has Wi-Fi and Ethernet capabilities, which optionally can provide the user with the ability to send reports to a remote printer and/or file storage location.
No clinical data were collected to prove the device meets acceptance criteria. The submission states, "No new clinical data were collected."
The document states that the EyeBOX Model EBX-4 is substantially equivalent to its predicate device, EyeBOX Model OCL 02.5. The rationale for this substantial equivalence is based on the fact that the EBX-4 has the same intended use, similar technological characteristics, and principles of operation as the predicate device. The changes implemented in the EBX-4 (e.g., removal of integrated headrest/chinrest, updated camera system, electronic component changes) are considered minor and do not raise new questions of safety or effectiveness.
To support this claim of substantial equivalence, Oculogica, Inc. performed performance testing rather than clinical efficacy studies. These tests focused on verifying the functionality and safety of the modified hardware and software.
Here's a breakdown of the requested information based on the provided document:
1. A table of acceptance criteria and the reported device performance
The document does not provide a specific table of acceptance criteria with corresponding performance metrics from a clinical study. Instead, the acceptance criteria appear to be addressed through a series of verification and validation activities showing that the device "functioned as intended" and that changes did not "adversely impact device performance."
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Device functions as intended. | "In all instances, the EyeBOX Model EBX-4 functioned as intended." |
Device does not have adverse impact on performance due to changes. | "Results of this comprehensive testing demonstrates these changes do not adversely impact device performance." |
Meets electrical safety standards. | "Electromagnetic emissions, immunity, and safety testing according to IEC 60601-1-2:2014 (4TH EDITION)..." "In all instances, the EyeBOX Model EBX-4 functioned as intended." |
Hardware integrity and functionality. | "Hardware verification testing," "Hardware specification testing." "In all instances, the EyeBOX Model EBX-4 functioned as intended." |
Eye-tracking functionality. | "Eye-tracking functional testing," "Eye-tracking bench testing." "In all instances, the EyeBOX Model EBX-4 functioned as intended." |
Software integrity and functionality. | "Software verification," "User acceptance testing." "In all instances, the EyeBOX Model EBX-4 functioned as intended." |
Ophthalmic safety (light hazard). | "Light hazard testing for ophthalmic devices." "In all instances, the EyeBOX Model EBX-4 functioned as intended." |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Test Set Sample Size: Not applicable. No new clinical data or test set was used for clinical performance evaluation. The performance testing focused on hardware and software verification.
- Data Provenance: Not applicable. No new clinical data was collected.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable. No clinical test set with human-established ground truth was used for this submission, as it relied on substantial equivalence and performance testing.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable. No clinical test set was used.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance
- No MRMC comparative effectiveness study was done. The submission explicitly states, "No new clinical data were collected."
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
- No standalone clinical performance study was done for this submission. The device is referred to as an "aid in the diagnosis of concussion... in conjunction with a standard neurological assessment of concussion," implying a human-in-the-loop use case. However, its performance was assessed through technical verification tests, not a clinical trial.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- For the technical performance testing, the ground truth would be defined by engineering specifications and expected functional behavior, verified against established standards (e.g., IEC standards for electrical safety, internal specifications for eye-tracking accuracy).
- For the original predicate device, the ground truth for clinical validation would have been established through clinical assessments of concussion (e.g., standard neurological assessments, potentially outcomes data or expert diagnosis), but details of the predicate device's clinical validation are not provided in this specific document. This submission does not provide information about the ground truth for the predicate device.
8. The sample size for the training set
- Not applicable. The document describes a device with "a data processing algorithm," but it does not specify if or how a machine learning model was trained with a dataset. The emphasis is on the EyeBOX system's "principles of operation" remaining unchanged from the predicate device, suggesting the core algorithm (and any associated training) was already established.
9. How the ground truth for the training set was established
- Not applicable. Information regarding a training set or its ground truth establishment is not provided in this 510(k) summary, as the submission relies on substantial equivalence and verification testing of hardware/software changes.
(368 days)
QEA
The EYE-SYNC® is intended for recording, viewing, and analyzing eye movements in support of identifying visual tracking impairment in human subjects.
The EYE-SYNC® is intended to record, measure, and analyze eye movements as an aid in the diagnosis of concussion, also known as mild traumatic brain injury (mTBI), within three days of sport-related head injury in patients 17-24 years of age in conjunction with a standard neurological assessment, for use by medical professionals qualified to interpret the results of a concussion assessment examination.
A negative EYE-SYNC® classification corresponds to eye movements that are consistent with a lack of concussion.
A positive EYE-SYNC® classification corresponds to eye movements that may be present in patients with concussion.
The SyncThink EYE-SYNC device is a portable, fully enclosed eye tracking environment with three primary components:
- Eye Tracker (head-mounted device) with eye tracking sensor
- Eye tracking display
- Android Tablet
The eye tracker is a modified Samsung GearVR provided by SensoMotoric Instruments (SMI). It is mounted to the subject's face and held either by hands placed on the side or using a strap. The eye tracking sensor includes two high-speed infrared cameras (one for each eye) connected to a visual display unit that provides battery, computation, and display. Camera lighting is provided by 12 high-quality light-emitting diodes (LEDs) centered at 850 nanometers. Eye gaze tracking is performed using a proprietary implementation of the pupil-corneal reflection method. The eye tracker display is a non-networked mobile device that fits within the eye tracking sensor and connects over USB. The display receives eye tracking sensor information for post-processing, manages sensor calibration, provides a binocular visual display to the subject, and interfaces with the Android tablet over Bluetooth. Eye gaze tracking and the visual display are combined to provide several assessment paradigms that characterize subject eye tracking performance:
- Smooth Pursuit
- Saccades
- Vestibular-Ocular Reflex (VOR)
- VOR Cancellation (VORx)
An EYE-SYNC software app on the display device manages these functions.
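Pupil-corneal reflection (P-CR) eye tracking of the kind described above estimates gaze from the vector between the detected pupil center and the corneal glint produced by the IR LEDs, mapped to screen coordinates by a per-subject calibration. SMI's implementation is proprietary, so the following is only a generic linear-mapping sketch; all names, pixel values, and calibration constants are invented:

```python
def gaze_from_pcr(pupil_xy, glint_xy, calib):
    """Map the pupil-center-to-corneal-reflection vector to an on-screen
    gaze point via a simple linear per-axis calibration (gain, offset)."""
    dx = pupil_xy[0] - glint_xy[0]
    dy = pupil_xy[1] - glint_xy[1]
    (gx, ox), (gy, oy) = calib  # per-axis (gain, offset) fit at calibration
    return gx * dx + ox, gy * dy + oy

# Hypothetical camera-pixel measurements and calibration constants:
x, y = gaze_from_pcr((312.0, 240.0), (305.0, 236.0),
                     ((40.0, 960.0), (40.0, 540.0)))
# (x, y) == (1240.0, 700.0)
```

Real implementations add nonlinear terms and head-movement compensation; the calibration step mentioned in the description exists precisely to fit these per-subject parameters.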
The Android tablet is a standard off-the-shelf 9.7" mobile tablet from Samsung with Verizon 4G cellular connectivity. A second EYE-SYNC software app provides an integrative platform for data collected on the HMD eye tracker:
- Patient, administrator, and records management
- Eye tracker assessment Bluetooth control
- Assessment Vestibular/Ocular-Motor Screening (VOMS) self-report tools
- Eye tracker assessment real-time Bluetooth monitoring
- Eye tracker data log Bluetooth transfer
- Eye tracker assessment analysis using visual synchronization metrics
- Analysis report generation with visualizations
- Background diagnostics to verify device health
- Cloud connectivity for data synchronization
Internal batteries in the eye tracking display and Android tablet provide power for remote use away from a power source. Each device has an EYE-SYNC software app installed to provide the described functionality. EYE-SYNC is provided as a complete system, and both apps are managed as a single software project with identical version numbers. EYE-SYNC is provided in a standalone carry case with user manual, strap, and cleaning and charging accessories.
The provided text describes the regulatory clearance of the EYE-SYNC device. Here's a breakdown of the acceptance criteria and study proving the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criterion | Reported Device Performance | Comments |
---|---|---|
Sensitivity (Detection of Concussion) | 82% (95% CI: 74%, 89%) | Meets the performance demonstrated by the predicate device (80.4%). |
Specificity (Non-detection of Concussion) | 93% (95% CI: 91%, 94%) | Significantly higher than the predicate device (66.1%), indicating better ability to correctly identify non-concussed individuals. |
Negative Predictive Value (NPV) | 98% (95% CI: 97%, 99%) | Higher than the predicate device (94.5%), indicating excellent ability to rule out concussion when the test is negative. |
Positive Predictive Value (PPV) | 56% (95% CI: 48%, 64%) | Higher than the predicate device (31.6%), indicating improved ability to identify concussion when the test is positive, though still with a notable rate of false positives. (Note: PPV and NPV calculated based on observed study prevalence of 10% concussion). |
Test-Retest Reliability: SD tangential error | ICC = 0.86 (0.82, 0.90) | High ICC values indicate strong reliability of the measurements. |
Test-Retest Reliability: SD radial error | ICC = 0.78 (0.71, 0.84) | |
Test-Retest Reliability: mean phase error | ICC = 0.83 (0.77, 0.87) | |
Electrical Safety | Complied with IEC 60601-1:2012 Ed. 3.0 and IEC 60601-2-57:2011 Ed. 1.0. | |
Electromagnetic Compatibility (EMC) | Complied with IEC 60601-1-2:2014 Ed 4.0. | |
Software Verification and Validation | Conducted according to FDA Guidance, considered "moderate" level of concern. | |
Cybersecurity | Assessed and documented according to FDA Guidance. | |
Light Safety (Photobiological Safety) | Conformed to EN62471. | |
Biocompatibility | Cytotoxicity, Sensitization, and Irritation tests conducted and passed. | (Specific results not given, but stated to be conducted in accordance with ISO 10993-1). |
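The parenthetical note about prevalence can be verified with the standard Bayes relations for predictive values: at 82% sensitivity, 93% specificity, and 10% prevalence, PPV works out to roughly 0.566 and NPV to roughly 0.979, consistent with the reported 56% and 98% (small differences come from rounding of the inputs). A quick sketch (the function name is ours):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV derived from sensitivity, specificity, and
    disease prevalence via the standard Bayes relations."""
    tp = sensitivity * prevalence                # true positives per unit population
    fp = (1 - specificity) * (1 - prevalence)    # false positives
    tn = specificity * (1 - prevalence)          # true negatives
    fn = (1 - sensitivity) * prevalence          # false negatives
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(0.82, 0.93, 0.10)
# ppv ≈ 0.566, npv ≈ 0.979
```

This dependence on prevalence is why the table flags that PPV and NPV were "calculated based on observed study prevalence of 10%": the same sensitivity and specificity would yield different predictive values in a population with a different concussion rate.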
2. Sample Size and Data Provenance
- Test Set Sample Size: 1,069 subjects
- Data Provenance: The study was a retrospective analysis of subjects ages 17-24 years actively engaged in competitive athletics. The country of origin is not explicitly stated, but the context of FDA clearance for the US market implies the data are relevant to, or sufficiently representative of, a US population. Patient evaluations were conducted by healthcare practitioners blinded to EYE-SYNC device output within 3 days of injury, suggesting the evaluations were collected prospectively for the purposes of the study.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Not explicitly stated as a specific number of individual experts.
- Qualifications of Experts: The ground truth for concussion was established using the SCAT-5 clinical reference standard definition of concussion, with evaluations conducted by "a healthcare practitioner blinded to EYE-SYNC device output." While specific qualifications like "radiologist with 10 years of experience" are not provided, "healthcare practitioner" implies a medical professional (e.g., physician, athletic trainer, nurse practitioner) trained in concussion assessment.
4. Adjudication Method for the Test Set
- The text states that the SCAT-5 clinical reference standard definition of concussion was used, with evaluations conducted by a single healthcare practitioner who was blinded to the EYE-SYNC device output. There is no mention of multiple readers or an explicit adjudication method (e.g., 2+1, 3+1). The "ground truth" seems to have been based on this single blinded practitioner's assessment using the SCAT-5.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study was done to evaluate how much human readers improve with AI vs. without AI assistance. The study focuses on the standalone algorithm's performance in aid of diagnosis. The device's use is explicitly "in conjunction with a standard neurological assessment," implying it's an aid to a human clinician, but the study itself does not measure the human-in-the-loop performance change.
6. Standalone (Algorithm Only) Performance
- Yes, a standalone performance evaluation was clearly done. The reported sensitivity, specificity, PPV, and NPV are for the EYE-SYNC device's classification algorithm. The healthcare practitioner who established the ground truth was "blinded to EYE-SYNC device output," which supports this being a standalone performance assessment of the algorithm against clinical ground truth.
7. Type of Ground Truth Used
- The ground truth used was expert consensus / clinical reference standard: The SCAT-5 clinical reference standard definition of concussion, as determined by a blinded healthcare practitioner. This is a widely accepted clinical tool for concussion assessment.
8. Sample Size for the Training Set
- The text does not explicitly state the sample size for the training set. The provided data focuses solely on the "validation data analysis" performed on 1,069 subjects, which represents the test set.
9. How Ground Truth for Training Set Was Established
- Since the training set size is not provided, the method for establishing its ground truth is also not explicitly described in this document. It is typically assumed that training data ground truth would be established through similar expert labeling or clinical diagnosis processes, but no details are given here.
(66 days)
QEA
The EyeBOX is intended to measure and analyze eye movements as an aid in the diagnosis of concussion, also known as mild traumatic brain injury (mTBI), within one week of head injury in patients 5 through 67 years of age in conjunction with a standard neurological assessment of concussion.
A negative EyeBOX classification may correspond to eye movement that is consistent with a lack of concussion. A positive EyeBOX classification corresponds to eye movement that may be present in both patients with or without concussion.
Oculogica's EyeBOX is an eye-tracking device with custom software. The device is comprised of a host PC with integrated touchscreen interface for the operator, eye tracking camera, LCD stimulus screen and head-stabilizing rest (chin rest and forehead rest) for the patient, and data processing algorithm. The data processing algorithm detects subtle changes in eye movements resulting from concussion. The eye tracking task takes about 4 minutes to complete and involves watching a video move around the perimeter of an LCD monitor positioned in front of the patient while a high speed near-infrared (IR) camera records gaze positions 500 times per second. The post-processed data are analyzed automatically to produce one or more outcome measures.
The provided text describes a 510(k) premarket notification for the Oculogica EyeBOX, Model OCL 02.5. This submission is for a modification to a previously cleared device (EyeBOX Model OCL 02). The document emphasizes that the new device has the same intended use, principles of operation, and similar technological characteristics as the predicate, and that the changes do not raise new questions of safety or effectiveness.
Therefore, the performance data presented is primarily focused on demonstrating that the modifications to the device do not adversely impact performance, rather than establishing new acceptance criteria or a comprehensive study proving the device meets those criteria from scratch. The document explicitly states: "comprehensive testing demonstrates that these changes do not adversely impact performance."
Given this context, here's a breakdown of the information requested, based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state acceptance criteria for diagnostic performance (e.g., sensitivity, specificity, accuracy) for the EyeBOX Model OCL 02.5, as it is a modification submission aiming to demonstrate that performance is not adversely affected by changes. The reported performance relates to the functionality of the new camera system.
Acceptance Criteria (for new camera system) | Reported Device Performance (for new camera system) |
---|---|
Spatial precision of the new camera met performance requirements. | Bench testing on an artificial eye demonstrated that the spatial precision met performance requirements. |
Step response of the new camera met performance requirements. | Bench testing on an artificial eye demonstrated that the step response met performance requirements. |
Reliability in detecting blinks. | Testing in N=84 human participants demonstrated the new camera and analysis could reliably detect blinks. |
Reliability in detecting gaze position across the range of gaze positions measured by the device. | Testing in N=84 human participants demonstrated the new camera and analysis could reliably detect gaze position across the range of gaze positions measured by the device. |
Electromagnetic emissions and immunity according to IEC 60601-1-2:2014 | Testing performed to IEC 60601-1-2:2014. |
Light hazard protection for ophthalmic instruments according to Z80.36-2016 | Testing performed to Z80.36-2016. |
Software functionality | Software verification and user testing performed. |
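Blink detection in bench tests like this is commonly reduced to finding runs of samples in which the pupil signal is lost: the blink count is the number of runs, and blink duration follows from run length divided by the sampling rate (500 Hz for the EyeBOX). A generic run-length sketch, not the EyeBOX algorithm (the validity-flag input and function name are illustrative):

```python
def detect_blinks(pupil_valid, fs=500.0):
    """Return (onset_s, offset_s) intervals for runs of invalid pupil
    samples in an eye-tracking trace sampled at fs Hz."""
    blinks, start = [], None
    for i, valid in enumerate(pupil_valid):
        if not valid and start is None:
            start = i                              # blink onset
        elif valid and start is not None:
            blinks.append((start / fs, i / fs))    # blink complete
            start = None
    if start is not None:                          # trace ended mid-blink
        blinks.append((start / fs, len(pupil_valid) / fs))
    return blinks

# A 10 ms pupil dropout starting 6 ms into a 500 Hz trace:
events = detect_blinks([True] * 3 + [False] * 5 + [True] * 4)
# events == [(0.006, 0.016)]
```

Verifying such logic against known stimuli (e.g., an artificial eye with scripted occlusions, or video-annotated human blinks) is the kind of bench and human-participant testing the table above summarizes.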
2. Sample size used for the test set and the data provenance
- Sample Size for test set: N=84 human participants for testing the new camera's reliability in detecting blinks and gaze position.
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective). It mentions "human participants," implying prospective testing.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable/provided for the described testing. The testing of the new camera's performance (blink and gaze detection) does not appear to involve expert-established ground truth for performance against a diagnosis of concussion. The core diagnostic performance is based on the predicate device's clearance.
4. Adjudication method for the test set
Not applicable/provided for the described testing. The testing focused on technical performance metrics (blink/gaze detection) rather than diagnostic accuracy requiring adjudication.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance
Not applicable. The document describes a device that "measures and analyzes eye movements as an aid in the diagnosis of concussion." It's an AI-based diagnostic aid, but the reported testing for this 510(k) submission is to show that a modified version of the device performs equivalently to the predicate device, not a comparative effectiveness study with human readers.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
Yes, implicit in the device's function. The EyeBOX relies on a "data processing algorithm" that automatically analyzes the post-processed eye-tracking data to produce a BOX score. The initial clearance (K191183 for Model OCL 02) would have established the standalone performance of this algorithm. The current submission confirms that changes to hardware/software (not the core algorithm's diagnostic output for a given eye movement pattern) do not negatively impact this.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the specific performance testing described for the OCL 02.5 model (new camera system), the ground truth for "reliably detect blinks and gaze position" would be the actual occurrence of blinks and gaze positions, likely established by controlled experimental conditions or other objective measurements. For the overall diagnostic capability of the EyeBOX (as established by the predicate device), the document states it's an "aid in the diagnosis of concussion...in conjunction with a standard neurological assessment of concussion." This implies that the ground truth for concussion diagnosis in the original predicate study would have involved clinical assessment outcomes.
8. The sample size for the training set
Not mentioned in the provided text, as this document refers to a modification of a previously cleared device. The training set information would have been part of the original K191183 submission for EyeBOX Model OCL 02.
9. How the ground truth for the training set was established
Not mentioned in the provided text for the same reason as point 8.
(90 days)
QEA
The EyeBOX is intended to measure and analyze eye movements as an aid in the diagnosis of concussion, also known as mild traumatic brain injury (mTBI), within one week of head injury in patients 5 through 67 years of age in conjunction with a standard neurological assessment of concussion.
A negative EyeBOX classification may correspond to eye movement that is consistent with a lack of concussion.
A positive EyeBOX classification corresponds to eye movement that may be present in both patients with or without concussion.
Oculogica's EyeBOX is an eye-tracking device with custom software. The device is comprised of a host PC with integrated touchscreen interface for the operator, eye tracking camera, LCD stimulus screen and head-stabilizing rest (chin rest and forehead rest) for the patient, and data processing algorithm. The data processing algorithm detects subtle changes in eye movements resulting from concussion. The eye tracking task takes about 4 minutes to complete and involves watching a video move around the perimeter of an LCD monitor positioned in front of the patient while a high speed near-infrared (IR) camera records gaze positions 500 times per second. The post-processed data are analyzed automatically to produce one or more outcome measures.
The provided text describes the Oculogica EyeBOX, Model OCL 02, a device intended to aid in the diagnosis of concussion (mild traumatic brain injury, mTBI). Since this is a 510(k) submission for a modified device (OCL 02) that is deemed substantially equivalent to a previously cleared device (OCL 01), the document focuses on demonstrating that the changes do not raise new questions of safety or effectiveness. Therefore, detailed information about the original clinical study that proved the device met its acceptance criteria is not explicitly repeated in this 510(k) summary; rather, it refers back to the data from the predicate device (DEN170091).
Based on the information provided, here's what can be extracted and inferred:
1. A table of acceptance criteria and the reported device performance
The provided 510(k) summary for OCL 02 does not explicitly state acceptance criteria or device performance metrics for this specific submission because it's a modification focusing on substantial equivalence. It asserts that the EyeBOX algorithm which processes the eye tracking data and outputs the BOX score is not changed. This implies that the performance established for the predicate device (OCL 01 under DEN170091) is maintained.
To fully answer this, one would typically need access to the original DEN170091 submission. However, we can infer the type of performance metrics that would have been evaluated: sensitivity and specificity for concussion diagnosis. The "Indications for Use" statement gives a clue:
- "A negative EyeBOX classification may correspond to eye movement that is consistent with a lack of concussion." (Implies high negative predictive value/sensitivity for ruling out concussion)
- "A positive EyeBOX classification corresponds to eye movement that may be present in both patients with or without concussion." (Implies that a positive result needs to be interpreted in conjunction with other clinical assessments and not as a definitive diagnosis, suggesting emphasis might be on sensitivity rather than high specificity as a standalone diagnostic.)
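The emphasis on ruling out concussion turns on negative predictive value (NPV), which follows from sensitivity, specificity, and prevalence via Bayes' rule. A minimal sketch of that relationship, using for illustration the sensitivity and specificity point estimates (80.4% and 66.1%) reported elsewhere on this page for the De Novo study; the prevalence values are hypothetical:

```python
def npv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Negative predictive value, P(no disease | negative test), by Bayes' rule."""
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

# Illustrative prevalence values only; not from any study.
for prev in (0.2, 0.4, 0.6):
    print(f"prevalence {prev:.0%}: NPV {npv(0.804, 0.661, prev):.1%}")
```

As the sketch shows, NPV falls as prevalence rises, which is why a "rule-out" interpretation is strongest in lower-prevalence settings.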
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The 510(k) for OCL 02 does not provide details about the test set sample size or data provenance, as it refers to the predicate device's data. For the predicate device (OCL 01, DEN170091), a clinical study would have been conducted. This information is not present in the provided text.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is also not present in the provided text for the OCL 02 submission, as it relies on the predicate device's data, which is not detailed here. For concussion diagnosis, the ground truth would typically be established by a consensus of neurologists, sports medicine physicians, or other specialists experienced in concussion diagnosis, likely based on a combination of clinical assessment (e.g., SCAT5, BESS, neurocognitive testing), imaging (if applicable for other purposes), and follow-up.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not present in the provided text. Adjudication methods would have been part of the clinical study design for the predicate device to establish the ground truth for concussion diagnosis.
5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance
The EyeBOX is described as an "aid in the diagnosis," not a standalone diagnostic that replaces human assessment. The text states it is used "in conjunction with a standard neurological assessment of concussion." This implies that it is meant to assist clinicians. However, the provided document does not contain information regarding an MRMC comparative effectiveness study or the effect size of human reader improvement with AI assistance. Such a study might have been part of the predicate device's clinical evidence, but it's not detailed here.
6. Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done
The device produces a "BOX score" and a "classification" (positive or negative), so the algorithm generates its classification without human intervention. However, the indication for use is "as an aid in the diagnosis... in conjunction with a standard neurological assessment," so a standalone, algorithm-only diagnosis is not the intended use or claim; a standalone performance study framed as a definitive diagnosis without a human in the loop would be inconsistent with the stated indication. The algorithm produces a result, but that result is an aid to the clinician who makes the final diagnosis.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The type of ground truth used for the predicate EyeBOX (OCL 01) would almost certainly have been expert clinical consensus of concussion diagnosis, based on standard neurological assessment, symptom questionnaires, and potentially neurocognitive testing. Concussion diagnosis is primarily clinical, so pathology is not a typical ground truth for this condition, and while outcomes data is important, the initial diagnostic ground truth typically relies on expert assessment at the time of diagnosis. This is inferred, as it is not explicitly stated in the provided document.
8. The sample size for the training set
The provided 510(k) summary for OCL 02 focuses on the current device and its substantial equivalence to its predicate. It explicitly states that "The EyeBOX algorithm which processes the eye tracking data and outputs the BOX score is not changed." This suggests the algorithm was developed and trained prior to the OCL 01 submission. The sample size for the training set for the original algorithm development is not provided in this document.
9. How the ground truth for the training set was established
Similar to the test set, the ground truth for the training set would have been established through expert clinical consensus based on comprehensive neurological assessments. This information is not provided in this document but is inferred based on standard practices for clinical AI/ML device development in this field.
(371 days)
QEA
The EyeBOX is intended to measure and analyze eye movements as an aid in the diagnosis of concussion within one week of head injury in patients 5 through 67 years of age in conjunction with a standard neurological assessment of concussion.
A negative EyeBOX classification may correspond to eye movement that is consistent with a lack of concussion.
A positive EyeBOX classification corresponds to eye movement that may be present in both patients with or without concussion.
The Oculogica EyeBOX system consists of an integrated stand, eye-tracking camera, video stimulus display screen, and computer programmed for analysis of eye movements. It is intended to detect abnormal eye movement that may be related to a concussion. The device measures gaze, calculates a score on a 0-20 scale based on these measurements, and displays an EyeBOX classification based upon whether the scale value is above 10 or not. Scale values of 10 or more yield a positive EyeBOX classification, while scale values under 10 yield a negative EyeBOX classification.
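The threshold rule described above (scale values of 10 or more are positive, under 10 negative) can be sketched as follows; the function name and the sample scores are illustrative, not taken from the device software:

```python
def classify_box_score(box_score: float) -> str:
    """Map a 0-20 BOX score to an EyeBOX classification.

    Per the description above: scores of 10 or more yield a positive
    classification, scores under 10 a negative one.
    """
    if not 0.0 <= box_score <= 20.0:
        raise ValueError("BOX score must be on the 0-20 scale")
    return "Positive" if box_score >= 10.0 else "Negative"

# Illustrative scores only; not real patient data.
print(classify_box_score(12.3))  # → Positive
print(classify_box_score(4.8))   # → Negative
```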
Here's a breakdown of the acceptance criteria and the study proving the EyeBOX device meets them, based on the provided text:
Acceptance Criteria and Device Performance
Acceptance Criteria (Pre-specified Performance Goals) | Reported Device Performance (Full Cohort) |
---|---|
Lower one-sided 95% confidence limit greater than 70% for sensitivity | Sensitivity: 80.4% point estimate (95% CI: 66.1% to (b)(4), upper bound redacted). The lower confidence limit of 66.1% falls below 70%, so this goal was not met; the text explicitly states "These goals were not met in the pivotal clinical study." |
Lower one-sided 95% confidence limit greater than 70% for specificity | Specificity: 66.1% point estimate (95% CI: 59.7% to 72.1%). The lower confidence limit of 59.7% falls below 70%, so this goal was not met. |
Note: The document explicitly states, "These goals were not met in the pivotal clinical study." However, the FDA's decision to grant De Novo classification indicates that while the prespecified performance goals (specifically the lower bound of the CI for sensitivity and specificity) were not met, the agency found that the probable benefits outweighed the probable risks, especially considering the device's adjunct nature, high Negative Predictive Value (NPV), and the challenges in clinical concussion adjudication.
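The performance goal (lower one-sided 95% confidence limit above 70%) can be illustrated with a binomial lower confidence bound. The sketch below uses the Wilson score interval as one common choice; the document does not say which interval the study actually used, and the subject counts here are hypothetical, chosen only so the point estimate lands near the reported 80.4% sensitivity:

```python
import math

def wilson_lower_bound(successes: int, n: int, z: float = 1.6449) -> float:
    """One-sided lower confidence bound for a binomial proportion
    using the Wilson score interval (z = 1.6449 for one-sided 95%)."""
    if n <= 0:
        raise ValueError("n must be positive")
    p = successes / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - margin) / denom

# Hypothetical counts (29 of 36) giving a point estimate near 80.4%;
# the study's actual counts are not given in the document.
lb = wilson_lower_bound(29, 36)
print(f"point estimate: {29/36:.1%}, lower 95% bound: {lb:.1%}")
print("goal met (> 70%)?", lb > 0.70)
```

With modest sample sizes the lower bound sits well below the point estimate, which is why a goal stated on the confidence limit rather than the point estimate can fail even when the point estimate exceeds 70%.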
Study Details
Aspect | Description |
---|---|
Sample Size (Test Set) | 282 subjects with complete data for analysis. Of these, 263 were assessed within 1 week of injury, which is the relevant timeframe for the device's intended use. |
Data Provenance | The text does not state the country of origin. It describes a "pivotal clinical study" in which subjects were screened and enrolled for the purpose of regulatory approval, indicating a prospective design. |
Number of Experts & Qualifications | Initially, a 3-clinician panel was used. Their specific qualifications are not detailed, but they are implied to be clinicians capable of making recommendations on concussion status. The text expresses concern that "84.4% of the first N=199 adjudications had at least one clinician render a recommendation of 'uncertain' for the patient's concussion status," leading to a revised ground truth definition. |
Adjudication Method | Initially, a 3-clinician panel was intended, but due to a high rate of "uncertain" recommendations the method was revised. The revised standard for classifying concussion status was based on objective criteria from the SAC (Standardized Assessment of Concussion) and the SCAT3 Symptom Severity Score (SSS), combined with the presence of Alteration of Consciousness (AOC) or Altered Mental Status (AMS), rather than direct clinician consensus on the final classification. This reflects a shift from consensus adjudication to a rule-based standard derived from common clinical assessment tools. |
MRMC Comparative Effectiveness Study | Not described. The study evaluated the stand-alone performance of the EyeBOX against a clinical reference standard, without comparing human readers with and without AI assistance. The device is specified as an "assessment aid" and "not intended for standalone detection or diagnostic purposes," implying human-in-the-loop use, but an MRMC study to quantify improvement was not performed or reported here. |
Standalone Performance (Algorithm Only) | Yes, the reported sensitivity, specificity, PPV, and NPV are measures of the EyeBOX algorithm's performance in classifying patients as "Positive" or "Negative" for concussion based on its internal score (0-20 scale), without direct human intervention in the classification itself. This is a standalone performance evaluation against the defined clinical reference standard. |
Type of Ground Truth Used | Clinical Reference Standard: a rule-based definition of concussion established for the study. Initially, a 3-clinician panel was used (abandoned due to high uncertainty). As revised, a subject had a concussion if they exhibited (a) AOC/AMS AND SAC 25, OR (b) if they did not exhibit AOC/AMS BUT SAC 32. This method aimed to create a more objective and consistent ground truth in the absence of a "gold standard" for concussion diagnosis. |
Sample Size for Training Set | Not specified in the provided text. The document describes a "pivotal clinical study" for performance evaluation (test set), but does not explicitly mention the size or nature of a separate training set if machine learning was involved in the algorithm's development. |
Ground Truth for Training Set (How Established) | Not specified. Given the lack of details on a distinct training set and the algorithm description (data collection, preprocessing, underlying model, score calculation), it's possible the "training" involved expert-driven feature engineering and model parameter tuning rather than supervised learning on a large, separately labeled dataset. If any machine learning was used for the "underlying model," the method for establishing ground truth for that training would be crucial, but it is not detailed in this document. |