The BrainPort V100 is an oral electronic vision aid that provides electro-tactile stimulation to aid profoundly blind patients in orientation, mobility, and object recognition as an adjunctive device to other assistive methods such as the white cane or a guide dog.
The BrainPort V100 design and components are the same as the previously granted BrainPort V100; the device continues to consist of the headset, controller (handset), intra-oral device (IOD), and battery charger. The camera unit in the headset captures the viewed scene as a digital image and forwards that image to the controller for processing. The IOD presents stimulation patterns representative of the camera image to the user's tongue. As in DEN130039, the BrainPort V100 is a fully wearable, battery-operated device with no physical connections to external equipment during normal operation. The device includes a means for a sighted individual (e.g., an instructor) to remotely view the camera and IOD images to assist in training through its vRemote software program.
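The camera-to-tongue pipeline described above can be sketched in outline. The grid dimensions, block averaging, and intensity quantization below are illustrative assumptions only, not the device's actual image processing:

```python
def image_to_stimulation(image, grid_rows=20, grid_cols=20, levels=4):
    """Hedged sketch of a sensory-substitution mapping: downsample a
    grayscale image (a list of rows of 0-255 pixel values) onto a grid
    of discrete stimulation intensity levels (0 .. levels-1).

    The 20x20 grid and 4 intensity levels are assumptions chosen for
    illustration; they do not describe the BrainPort V100's firmware.
    """
    h, w = len(image), len(image[0])
    grid = []
    for r in range(grid_rows):
        row = []
        for c in range(grid_cols):
            # Average the pixels in the image block this electrode covers.
            r0, r1 = r * h // grid_rows, (r + 1) * h // grid_rows
            c0, c1 = c * w // grid_cols, (c + 1) * w // grid_cols
            block = [image[y][x] for y in range(r0, r1) for x in range(c0, c1)]
            mean = sum(block) / len(block)
            # Quantize mean brightness to a discrete stimulation level.
            row.append(min(levels - 1, int(mean * levels / 256)))
        grid.append(row)
    return grid
```

For example, a test image whose left half is black and right half is white would map to an electrode grid whose left columns receive no stimulation and whose right columns receive maximum-level stimulation, illustrating how spatial structure in the scene is preserved in the tactile pattern.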
The provided text describes a 510(k) summary for the BrainPort V100, which is primarily focused on demonstrating substantial equivalence to a previously cleared predicate device (DEN130039). The submission highlights minor modifications related to cleaning/disinfection procedures and a software update, and verifies that these changes do not alter fundamental device performance or safety.
Therefore, the study described does not involve a traditional clinical performance study with acceptance criteria in the sense of accuracy, sensitivity, or specificity for a diagnostic device. Instead, the "acceptance criteria" and "device performance" relate to the validation of the changes made to the device and ensuring they meet established safety and functionality standards.
The information is structured below per your request, with the understanding that "acceptance criteria" and "device performance" in this context refer to the validation of the modifications rather than a clinical efficacy study.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Specific Criteria/Standard Adhered To | Reported Device Performance/Result |
|---|---|---|
| Cleaning/Disinfection Validation | AAMI TIR12:2010, AAMI TIR30:2011, ISO 17664:2004, ANSI/AAMI ST81:2004(R)2010, ANSI/AAMI ST58:2013 guidelines; no reduction in electrode functionality after cleaning/disinfection. | All results were passing, validating the cleaning and disinfection procedures. Performance testing verified no reduction in electrode functionality. |
| Electrical Safety/Electromagnetic Compatibility | IEC 60601-1, IEC 60601-1-2, IEC 60601-1-11 | Results were passing. No changes in electronic hardware/technology compared to the predicate device. |
| Biocompatibility | (Implicitly, established and low risk) | Established as low risk. No changes to device materials compared to the predicate. |
| Software | Software verification and validation testing for the minor update. | Results demonstrated that the software was appropriate for release, performing as intended. |
| Overall Substantial Equivalence | The modified BrainPort V100 maintains the same intended use, indications for use, and very similar technological characteristics and principles of operation as the predicate device (DEN130039), with no new safety or effectiveness questions raised by the minor changes. | The BrainPort V100 is substantially equivalent to its predicate device. |
2. Sample Size Used for the Test Set and the Data Provenance
- Sample Size: The document does not specify a "test set" in the traditional sense of patient data.
- For Cleaning/Disinfection Validation: These studies typically involve a defined number of device units (or components) subjected to multiple cleaning/disinfection cycles. The specific sample size is not mentioned, but it would have been sufficient to meet the statistical requirements of the cited standards.
- For Electrical Safety, Biocompatibility, and Software Validation: These typically involve testing of device prototypes or software builds. Specific sample sizes are not provided but would be based on engineering validation practices.
- Data Provenance: Not applicable in the context of clinical data. The validation activities are likely conducted in laboratory settings or by independent testing facilities according to regulatory standards. No country of origin for clinical data is mentioned as it's not a clinical study. All studies appear to be prospective in nature, as they involve actively conducting tests on the modified device.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
- This is not applicable to the type of studies described. "Ground truth" in this context refers to established engineering and regulatory standards (e.g., AAMI, ISO, IEC) and the expertise of professionals in validation engineering, microbiology, electrical engineering, and software testing. The document states that cleaning/disinfection validation was conducted by an "independent laboratory."
4. Adjudication Method for the Test Set
- Not applicable. Adjudication methods like 2+1 or 3+1 are used for human expert review of clinical cases. The studies described are technical validations against established standards.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement with vs. without AI Assistance
- No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The BrainPort V100 is an "oral electronic vision aid" that provides electro-tactile stimulation to aid profoundly blind patients in orientation, mobility, and object recognition. It's not an AI-assisted diagnostic imaging device that requires human "readers" in the conventional sense. The product does include "vRemote software program" to assist in training by allowing a sighted individual to remotely view camera and IOD images, but this is for training support, not for AI-assisted diagnostic interpretation.
6. If Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Testing Was Done
- Not applicable as described. The BrainPort V100 is a device for sensory substitution, not a standalone diagnostic algorithm. The "algorithm" (processing of visual input to electro-tactile patterns) is an integral part of the device's function, inherently designed for human interaction (the user's tongue). The software validation ensures the internal algorithms perform as intended.
7. The Type of Ground Truth Used
- The "ground truth" for the validation studies was primarily established regulatory standards and engineering specifications.
- For Cleaning/Disinfection: Microbiological standards for reduction of pathogens, chemical compatibility, and maintenance of device functionality.
- For Electrical Safety: Compliance with specified voltage, current, and electromagnetic interference limits.
- For Software: Verification against software requirements and design specifications.
- For Biocompatibility: Standards for material safety in contact with biological tissues.
8. The Sample Size for the Training Set
- Not applicable. This is not a machine learning or AI-driven diagnostic device that typically involves a "training set" of data. The device itself is the product undergoing technical validation.
9. How the Ground Truth for the Training Set was Established
- Not applicable as there is no "training set" in the context of this device and its regulatory submission.
§ 886.5905 Oral electronic vision aid.
(a) Identification. An oral electronic vision aid is a battery-powered prescription device that contains an electrode stimulation array to generate electrotactile stimulation patterns that are derived from digital object images captured by a camera. It is intended to aid profoundly blind patients in orientation, mobility, and object recognition as an adjunctive device to other assistive methods such as a white cane or a guide dog.
(b) Classification. Class II (special controls). The special controls for this device are:
(1) Clinical performance testing must demonstrate an acceptable adverse event profile, including adverse events involving the mouth, tongue, and gums and demonstrate the effect of the stimulation to provide clinically meaningful outcomes. The clinical performance testing must also investigate the anticipated conditions of use, including potential use error, intended environment of use, and duration of use.
(2) Non-clinical performance testing must demonstrate that the device performs as intended under anticipated conditions of use, including simulated moisture ingress, device durability, and battery reliability.
(3) Software verification, validation, and hazard analysis must be performed.
(4) Analysis/testing must validate electromagnetic compatibility.
(5) Analysis/testing must validate electrical safety.
(6) Analysis/testing must assess and validate wireless coexistence concerns.
(7) Any elements of the device that contact the patient must be demonstrated to be biocompatible.
(8) Training must include elements to ensure that the healthcare provider and user can identify the safe environments for device use, use all safety features of the device, and operate the device in the intended environment of use.
(9) Labeling for the trainer and user must include a summary of the clinical testing including adverse events encountered under use conditions, summary of study outcomes and endpoints, and information pertinent to use of the device including the conditions under which the device was studied (e.g., level of supervision or assistance, and environment of use).