510(k) Data Aggregation (680 days)
BRAINPORT V100 DEVICE
The BrainPort V100 is an oral electronic vision aid that provides electro-tactile stimulation to aid profoundly blind patients in orientation, mobility, and object recognition as an adjunctive device to other assistive methods such as the white cane or a guide dog.
The BrainPort V100 is an electronic assistive aid that translates images of objects captured by a digital camera into electro-tactile signals that are presented to the user's tongue. With training, users are able to use the electro-tactile signals to perceive the shape, size, location, and motion of objects. The BrainPort V100 is intended to augment, rather than replace, other assistive technology such as the white cane or guide dog. The BrainPort V100 is not used to diagnose or treat the underlying condition that leads to the user's visual impairment. The BrainPort V100 is intended for prescription use only and for single-patient use. The BrainPort V100 consists of three components: the headset, the controller (also known as the handset), and the battery charger.
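As a rough illustration of the image-to-tactile translation described above, the sketch below downsamples a grayscale camera frame to a small electrode grid and maps luminance to a stimulation level. The 20x20 grid size, the linear luminance-to-intensity mapping, and the function names are illustrative assumptions, not specifications from this submission.

```python
import numpy as np

def frame_to_stimulation(frame: np.ndarray,
                         grid_shape: tuple = (20, 20),
                         max_intensity: int = 100) -> np.ndarray:
    """Map a grayscale camera frame onto an electrode grid.

    Hypothetical sketch: block-average the frame down to the grid
    resolution, then scale luminance linearly to a stimulation level.
    Grid size, scaling, and intensity range are illustrative assumptions.
    """
    h, w = frame.shape
    gh, gw = grid_shape
    # Crop so the frame divides evenly into grid cells.
    frame = frame[:h - h % gh, :w - w % gw]
    # Block-average each cell (simple spatial downsampling).
    cells = frame.reshape(gh, frame.shape[0] // gh, gw, frame.shape[1] // gw)
    luminance = cells.mean(axis=(1, 3))
    # Brighter regions -> stronger stimulation (linear map, an assumption).
    return (luminance / 255.0 * max_intensity).astype(int)

# Example: a synthetic 240x320 frame with a bright square near the center.
frame = np.zeros((240, 320), dtype=np.uint8)
frame[90:150, 130:190] = 255
pattern = frame_to_stimulation(frame)
print(pattern.shape)   # (20, 20) grid of stimulation levels
```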
Acceptance Criteria and Device Performance for BrainPort V100
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Criteria | Reported Device Performance |
---|---|---|
Safety | Acceptable rate of device-related adverse events. | |
Battery | 500 charge cycles. | |
Software Verification | Software verification, validation, and hazard analysis performed according to FDA guidance for moderate level-of-concern software. | Software V&V included information on hazards, requirements, development, traceability, and anomalies per FDA guidance. (Note: data exchange is unencrypted, but no PII is stored). |
Moisture Ingress | Characterize internal moisture penetration, ensuring no penetration to electronic components. | Testing showed very limited moisture ingress and no penetration to the electronic component area after 690 hours of continuous immersion (simulating 6.5 years of use). IOD is encapsulated in epoxy. Device is IP20 (controller/headset) and IPX4 (IOD). |
IOD Durability | Durability of IOD and electrodes under continuous stimulation. | IOD electrodes continuously stimulated for 8 days (simulating 38 months of normal use). No tarnishing or pitting observed under visual or 120x optical magnification. |
MR Compatibility | If not MR compatible, clearly label and warn users. | Device is "MR Unsafe" and labeling clearly states it has not been evaluated, should be removed before entering an MR environment, and that safety is unknown due to potential heating, migration, or artifact. |
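As a back-of-envelope check on the simulated-use figures in the table, the bench durations can be converted to an implied average daily exposure, assuming the equivalence is a simple linear time scaling. This is only an arithmetic illustration; the actual accelerated-test protocol is not described here and may use a different model.

```python
# Implied average daily exposure if the bench durations scale linearly
# to the stated real-world periods (an assumption; the actual test
# protocol may use a different acceleration model).

immersion_hours = 690          # continuous immersion, moisture test
immersion_years = 6.5          # stated simulated use
per_day_moisture = immersion_hours / (immersion_years * 365)
print(f"Moisture: ~{per_day_moisture * 60:.0f} min/day of exposure")   # ~17 min/day

stim_hours = 8 * 24            # 8 days of continuous stimulation
stim_months = 38               # stated simulated normal use
per_day_stim = stim_hours / (stim_months * 30.4)
print(f"Stimulation: ~{per_day_stim * 60:.0f} min/day of use")         # ~10 min/day
```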
Study Proving Device Acceptance Criteria
The BrainPort V100's acceptance criteria were primarily proven through a one-year clinical study, augmented by additional adverse event data collected outside the United States, and supported by various bench tests.
Clinical Study Elements:
- Sample Size used for the Test Set and Data Provenance:
- Test Set Sample Size: 75 enrolled subjects (74 completed training, 57 completers for primary endpoints).
- Data Provenance: Prospective, single-arm, open-label clinical study conducted at 7 sites in the U.S. and Canada.
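For reference, these enrollment figures correspond to the following completion rates (straightforward arithmetic on the numbers reported above):

```python
enrolled = 75
completed_training = 74
primary_endpoint_completers = 57

print(f"Completed training: {completed_training / enrolled:.1%}")                       # 98.7%
print(f"Evaluable for primary endpoints: {primary_endpoint_completers / enrolled:.1%}")  # 76.0%
```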
- Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts:
- The document does not explicitly state the number of experts used to establish a ground truth in the traditional sense of consensus reading for diagnostic imaging.
- Instead, the "ground truth" for effectiveness in object recognition and mobility was established by the subjects' observed performance on standardized tasks (e.g., correctly identifying and touching objects, ambulating to signs).
- A test administrator objectively observed and recorded subject performance for the Object Recognition test, with a similar arrangement for the Mobility test. Their qualifications are not detailed beyond "test administrator," but they would have been trained in conducting the assessments.
- Oral health exams were conducted by unspecified healthcare professionals as part of quarterly follow-up.
- Adjudication Method for the Test Set:
- Not applicable in the typical sense for image or clinical data interpretation. The effectiveness endpoints (object recognition, mobility) were based on direct observation and recording of subject performance against defined criteria (e.g., touching the correct object, reaching the correct sign). Adverse events were determined by the investigator.
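To make "performance against defined criteria" concrete, a behavioral endpoint of this kind is typically summarized as the proportion of subjects meeting the pre-defined task criterion, with a confidence interval. The sketch below is a generic illustration only; the responder count and the choice of a Wilson interval are illustrative assumptions, not details reported in this submission.

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """95% Wilson score interval for a success proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical example: of the 57 completers, suppose 40 met the
# pre-defined object-recognition criterion (40 is illustrative only).
responders, completers = 40, 57
low, high = wilson_ci(responders, completers)
print(f"Responder rate: {responders / completers:.1%} (95% CI {low:.1%} to {high:.1%})")
```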
- If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not done. This device is an assistive technology, not an AI-powered diagnostic tool. The goal was to demonstrate whether the device itself, when used by profoundly blind individuals, improved their orientation, mobility, and object recognition, rather than to evaluate human reader performance with or without AI assistance. The study assessed the users' absolute performance with the device after training, not their improvement over a baseline without the device (though the subjects are profoundly blind, implying severe functional limitation without assistive aids).
- If a Standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- No, an algorithm-only standalone performance was not done. The BrainPort V100 is designed as a human-in-the-loop assistive device. Its function is to convert visual information into tactile signals for human interpretation via the tongue. Therefore, evaluating the algorithm's performance without a human user is not relevant to its intended use or clinical effectiveness.
- The Type of Ground Truth Used:
- Behavioral Performance and Clinical Observation: For effectiveness, the ground truth was the observed, objective behavioral performance of the profoundly blind subjects on defined tasks (e.g., correctly identifying an object, successfully navigating to a sign). For safety, the ground truth was clinical assessment of adverse events by investigators and oral health examinations, alongside subject-reported adverse events.
- The Sample Size for the Training Set:
- The document describes a "training phase" for the clinical study subjects, where 74 out of 75 enrolled subjects completed 2-3 days (10 hours) of supervised training. This initial training segment is part of preparing the subjects to use the device for the study, rather than training an algorithm.
- The document does not specify a "training set" sample size for the device's internal algorithms (e.g., for camera processing or tactile signal generation). The device's core functionality (translating luminance into electrotactile signals) appears to be based on pre-programmed logic rather than machine learning models that require a distinct training set.
- How the Ground Truth for the Training Set Was Established:
- Given that the device's algorithms do not appear to be machine learning-based with a "training set" in the traditional sense, this question is not directly applicable.
- The "ground truth" for the human training (i.e., for the users and trainers) was established through best practices in blindness rehabilitation and education. User training protocols were developed by Wicab and validated in the study. Trainers had relevant experience (CLVS, COMS, TVI) and were further trained by Wicab according to specific procedures. The effectiveness of this human training was validated by end-user success in the objective performance tasks.