Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K152851
    Device Name
    BrainPort V100
    Manufacturer
    Date Cleared
    2015-12-24

    (86 days)

    Product Code
    Regulation Number
    886.5905
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Applicant Name (Manufacturer) :

    WICAB, INC.

    Intended Use

    The BrainPort V100 is an oral electronic vision aid that provides electro-tactile stimulation to aid profoundly blind patients in orientation, mobility, and object recognition as an adjunctive device to other assistive methods such as the white cane or a guide dog.

    Device Description

    The BrainPort V100 design and components are the same as the previously granted BrainPort V100; the device continues to consist of the headset, controller (handset), intra-oral device (IOD), and battery charger. The camera unit in the headset captures the viewed scene as a digital image and forwards that image to the controller for processing. The IOD presents stimulation patterns representative of the camera image to the user's tongue. Same as in DEN130039, the BrainPort V100 is a fully wearable, battery operated device with no physical connections to external equipment during normal operation. The device includes a means for a sighted individual (e.g., instructor) to remotely view the camera and IOD images to assist in training through its vRemote software program.
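The capture, process, and stimulate pipeline described above can be sketched in outline. This is a minimal illustration only: the 20x20 electrode grid, the block-averaging scheme, and the 0.0-1.0 intensity scale are assumptions for the sake of the sketch, not parameters taken from the submission.

```python
import numpy as np

# Illustrative sketch of the capture -> process -> stimulate pipeline.
# GRID and the luminance-to-intensity mapping are assumed values,
# not the actual BrainPort V100 parameters.
GRID = 20  # assumed electrode array resolution

def frame_to_stimulation(frame: np.ndarray) -> np.ndarray:
    """Downsample a grayscale camera frame to the electrode grid and
    map average pixel luminance to a per-electrode stimulation level."""
    h, w = frame.shape
    bh, bw = h // GRID, w // GRID
    # Average luminance over each block of pixels.
    blocks = frame[:bh * GRID, :bw * GRID].reshape(GRID, bh, GRID, bw)
    levels = blocks.mean(axis=(1, 3))
    # Normalize 0-255 luminance to a 0.0-1.0 stimulation intensity.
    return levels / 255.0

# Example: a frame that is dark on the left, bright on the right.
frame = np.tile(np.linspace(0, 255, 200), (200, 1))
pattern = frame_to_stimulation(frame)
assert pattern.shape == (GRID, GRID)
assert pattern[0, 0] < pattern[0, -1]  # brighter regions stimulate more
```

The sketch captures only the data flow (camera image in, per-electrode stimulation pattern out); the real device adds user controls, the IOD hardware interface, and the vRemote viewing path.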

    AI/ML Overview

The provided text describes a 510(k) summary for the BrainPort V100, which focuses primarily on demonstrating substantial equivalence to the previously granted predicate device (De Novo DEN130039). The submission highlights minor modifications related to cleaning/disinfection procedures and a software update, and verifies that these changes do not alter fundamental device performance or safety.

    Therefore, the study described does not involve a traditional clinical performance study with acceptance criteria in the sense of accuracy, sensitivity, or specificity for a diagnostic device. Instead, the "acceptance criteria" and "device performance" relate to the validation of the changes made to the device and ensuring they meet established safety and functionality standards.

The information is structured below with the understanding that "acceptance criteria" and "device performance" in this context refer to validation of the modifications rather than to a clinical efficacy study.

    1. Table of Acceptance Criteria and Reported Device Performance

    Cleaning/Disinfection Validation
      • Criteria: AAMI TIR12:2010, AAMI TIR30:2011, ISO 17664:2004, ANSI/AAMI ST81:2004(R)2010, and ANSI/AAMI ST58:2013 guidelines; no reduction in electrode functionality after cleaning/disinfection.
      • Result: All results were passing, validating the cleaning and disinfection procedures. Performance testing verified no reduction in electrode functionality.
    Electrical Safety/Electromagnetic Compatibility
      • Criteria: IEC 60601-1, IEC 60601-1-2, IEC 60601-1-11.
      • Result: Results were passing. No changes in electronic hardware/technology compared to the predicate device.
    Biocompatibility
      • Criteria: Implicitly established and low risk.
      • Result: Established as low risk. No changes to device materials compared to the predicate.
    Software
      • Criteria: Software verification and validation testing for the minor update.
      • Result: Results demonstrated that the software was appropriate for release, performing as intended.
    Overall Substantial Equivalence
      • Criteria: The modified BrainPort V100 maintains the same intended use, indications for use, and very similar technological characteristics and principles of operation as the predicate device (DEN130039), with no new safety or effectiveness questions raised by the minor changes.
      • Result: The BrainPort V100 is substantially equivalent to its predicate device.

    2. Sample Size Used for the Test Set and the Data Provenance

    • Sample Size: The document does not specify a "test set" in the traditional sense of patient data.
      • For Cleaning/Disinfection Validation: These studies typically involve a defined number of device units (or components) subjected to multiple cleaning/disinfection cycles. The specific sample size is not mentioned, but it would have been sufficient to meet the statistical requirements of the cited standards.
      • For Electrical Safety, Biocompatibility, and Software Validation: These typically involve testing of device prototypes or software builds. Specific sample sizes are not provided but would be based on engineering validation practices.
    • Data Provenance: Not applicable in the context of clinical data. The validation activities are likely conducted in laboratory settings or by independent testing facilities according to regulatory standards. No country of origin for clinical data is mentioned as it's not a clinical study. All studies appear to be prospective in nature, as they involve actively conducting tests on the modified device.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    • This is not applicable to the type of studies described. "Ground truth" in this context refers to established engineering and regulatory standards (e.g., AAMI, ISO, IEC) and the expertise of professionals in validation engineering, microbiology, electrical engineering, and software testing. The document states that cleaning/disinfection validation was conducted by an "independent laboratory."

    4. Adjudication Method for the Test Set

    • Not applicable. Adjudication methods like 2+1 or 3+1 are used for human expert review of clinical cases. The studies described are technical validations against established standards.
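For context, a 2+1 adjudication scheme of the kind mentioned above works roughly as follows: two primary readers label each case independently, and a third reader adjudicates only when they disagree. A minimal sketch (function name and labels are illustrative):

```python
def adjudicate_2plus1(reader_a: str, reader_b: str, adjudicator: str) -> str:
    """2+1 adjudication: if the two primary readers agree, their label
    stands; otherwise the third (adjudicating) reader's label decides."""
    if reader_a == reader_b:
        return reader_a
    return adjudicator

# Agreement between the two primary readers needs no adjudication.
assert adjudicate_2plus1("positive", "positive", "negative") == "positive"
# Disagreement is resolved by the adjudicator.
assert adjudicate_2plus1("positive", "negative", "negative") == "negative"
```

A 3+1 scheme is analogous, with majority vote among three readers and the fourth resolving any remaining ambiguity. None of this applies to the bench validations described here.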

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with AI Versus Without AI Assistance

    • No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The BrainPort V100 is an "oral electronic vision aid" that provides electro-tactile stimulation to aid profoundly blind patients in orientation, mobility, and object recognition. It's not an AI-assisted diagnostic imaging device that requires human "readers" in the conventional sense. The product does include "vRemote software program" to assist in training by allowing a sighted individual to remotely view camera and IOD images, but this is for training support, not for AI-assisted diagnostic interpretation.

    6. If Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Was Evaluated

    • Not applicable as described. The BrainPort V100 is a device for sensory substitution, not a standalone diagnostic algorithm. The "algorithm" (processing of visual input to electro-tactile patterns) is an integral part of the device's function, inherently designed for human interaction (the user's tongue). The software validation ensures the internal algorithms perform as intended.

    7. The Type of Ground Truth Used

    • The "ground truth" for the validation studies was primarily established regulatory standards and engineering specifications.
      • For Cleaning/Disinfection: Microbiological standards for reduction of pathogens, chemical compatibility, and maintenance of device functionality.
      • For Electrical Safety: Compliance with specified voltage, current, and electromagnetic interference limits.
      • For Software: Verification against software requirements and design specifications.
      • For Biocompatibility: Standards for material safety in contact with biological tissues.

    8. The Sample Size for the Training Set

    • Not applicable. This is not a machine learning or AI-driven diagnostic device that typically involves a "training set" of data. The device itself is the product undergoing technical validation.

    9. How the Ground Truth for the Training Set was Established

    • Not applicable as there is no "training set" in the context of this device and its regulatory submission.

    K Number
    DEN130039
    Manufacturer
    Date Cleared
    2015-06-18

    (680 days)

    Product Code
    Regulation Number
    886.5905
    Type
    Direct
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Applicant Name (Manufacturer) :

    WICAB, INC.

    Intended Use

    The BrainPort V100 is an oral electronic vision aid that provides electro-tactile stimulation to aid profoundly blind patients in orientation, mobility, and object recognition as an adjunctive device to other assistive methods such as the white cane or a guide dog.

    Device Description

    The BrainPort V100 is an electronic assistive aid that translates images of objects captured by a digital camera into electro-tactile signals that are presented to the user's tongue. With training, users are able to use the electro-tactile signals to perceive the shape, size, location, and motion of objects. The BrainPort V100 is intended to augment, rather than replace, other assistive technology such as the white cane or guide dog. The BrainPort V100 is not used to diagnose or treat the underlying condition that leads to the user's visual impairment. The BrainPort V100 is intended for prescription use only and for single patient use. The BrainPort V100 consists of three components: the headset, the controller (also known as the handset), and the battery charger.

    AI/ML Overview

    Acceptance Criteria and Device Performance for BrainPort V100

    1. Table of Acceptance Criteria and Reported Device Performance

    Safety
      • Criteria/Result: Acceptable rate (500 charge cycles.
    Software Verification
      • Criteria: Software verification, validation, and hazard analysis performed according to FDA guidance for moderate level-of-concern software.
      • Result: Software V&V included information on hazards, requirements, development, traceability, and anomalies per FDA guidance. (Note: data exchange is unencrypted, but no PII is stored.)
    Moisture Ingress
      • Criteria: Characterize internal moisture penetration, ensuring no penetration to electronic components.
      • Result: Testing showed very limited moisture ingress and no penetration to the electronic component area after 690 hours of continuous immersion (simulating 6.5 years of use). The IOD is encapsulated in epoxy. The device is rated IP20 (controller/headset) and IPX4 (IOD).
    IOD Durability
      • Criteria: Durability of the IOD and electrodes under continuous stimulation.
      • Result: IOD electrodes were continuously stimulated for 8 days (simulating 38 months of normal use). No tarnishing or pitting was observed under visual inspection or 120x optical magnification.
    MR Compatibility
      • Criteria: If not MR compatible, clearly label and warn users.
      • Result: The device is "MR Unsafe"; labeling clearly states it has not been evaluated, should be removed before entering an MR environment, and that safety is unknown due to potential heating, migration, or artifact.
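The durability and moisture figures above quote accelerated tests (8 days of continuous stimulation simulating 38 months of normal use; 690 hours of immersion simulating 6.5 years of use). The daily duty cycle implied by those ratios can be back-calculated as a plausibility check. The assumption that continuous test hours map one-to-one to accumulated in-use hours is mine; the summary does not state the acceleration model.

```python
# Back-of-envelope check of the accelerated-test ratios quoted above,
# assuming (my assumption) that continuous test hours correspond 1:1
# to accumulated in-use exposure hours.
HOURS_PER_DAY = 24

# IOD durability: 8 days continuous stimulation ~ 38 months of use.
test_hours = 8 * HOURS_PER_DAY        # 192 h of stimulation
use_days = 38 * 30.44                 # ~1157 days of ownership
implied_minutes_per_day = test_hours / use_days * 60
# -> roughly 10 minutes of active stimulation per day
assert 9 < implied_minutes_per_day < 11

# Moisture ingress: 690 h continuous immersion ~ 6.5 years of use.
immersion_hours = 690
use_days = 6.5 * 365.25
implied_minutes_per_day = immersion_hours / use_days * 60
# -> roughly 17 minutes of moisture exposure per day
assert 16 < implied_minutes_per_day < 19
```

Under that assumption, both tests correspond to modest daily exposure budgets, which is consistent with an adjunctive aid used intermittently rather than continuously.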

    Study Proving Device Acceptance Criteria

    The BrainPort V100's acceptance criteria were primarily proven through a one-year clinical study, augmented by additional adverse event data collected outside the United States, and supported by various bench tests.

    Clinical Study Elements:

    1. Sample Size used for the Test Set and Data Provenance:

      • Test Set Sample Size: 75 enrolled subjects (74 completed training, 57 completers for primary endpoints).
      • Data Provenance: Prospective, single-arm, open-label clinical study conducted at 7 sites in the U.S. and Canada.
    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts:

      • The document does not explicitly state the number of experts used to establish a ground truth in the traditional sense of consensus reading for diagnostic imaging.
      • Instead, the "ground truth" for effectiveness in object recognition and mobility was established by the subjects' observed performance on standardized tasks (e.g., correctly identifying and touching objects, ambulating to signs).
      • The study involved a test administrator for the Object Recognition test and a similar setup for the Mobility Test, who would objectively observe and record subject performance. While their specific qualifications are not detailed beyond "test administrator," they would be trained in conducting the assessments.
      • Oral health exams were conducted by unspecified healthcare professionals as part of quarterly follow-up.
    3. Adjudication Method for the Test Set:

      • Not applicable in the typical sense for image or clinical data interpretation. The effectiveness endpoints (object recognition, mobility) were based on direct observation and recording of subject performance against defined criteria (e.g., touching the correct object, reaching the correct sign). Adverse events were determined by the investigator.
    4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with AI Versus Without AI Assistance:

      • No, an MRMC comparative effectiveness study was not done. This device is an assistive technology, not an AI-powered diagnostic tool. The goal was to demonstrate if the device itself, when used by profoundly blind individuals, improved their ability in orientation, mobility, and object recognition, rather than evaluating human reader performance with or without AI assistance. The study assessed the users' absolute performance with the device after training, not their improvement over a baseline without the device (though they are profoundly blind, implying severe functional limitation without assistive aids).
    5. If Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Was Evaluated:

      • No, an algorithm-only standalone performance was not done. The BrainPort V100 is designed as a human-in-the-loop assistive device. Its function is to convert visual information into tactile signals for human interpretation via the tongue. Therefore, evaluating the algorithm's performance without a human user is not relevant to its intended use or clinical effectiveness.
    6. The Type of Ground Truth Used:

      • Behavioral Performance and Clinical Observation: For effectiveness, the ground truth was the observed, objective behavioral performance of the profoundly blind subjects on defined tasks (e.g., correctly identifying an object, successfully navigating to a sign). For safety, the ground truth was clinical assessment of adverse events by investigators and oral health examinations, alongside subject-reported adverse events.
    7. The Sample Size for the Training Set:

      • The document describes a "training phase" for the clinical study subjects, where 74 out of 75 enrolled subjects completed 2-3 days (10 hours) of supervised training. This initial training segment is part of preparing the subjects to use the device for the study, rather than training an algorithm.
      • The document does not specify a "training set" sample size for the device's internal algorithms (e.g., for camera processing or tactile signal generation). The device's core functionality (translating luminance into electrotactile signals) appears to be based on pre-programmed logic rather than machine learning models that require a distinct training set.
    8. How the Ground Truth for the Training Set Was Established:

      • Given that the device's algorithms do not appear to be machine learning-based with a "training set" in the traditional sense, this question is not directly applicable.
      • The "ground truth" for the human training (i.e., for the users and trainers) was established through best practices in blindness rehabilitation and education. User training protocols were developed by Wicab and validated in the study. Trainers had relevant experience (CLVS, COMS, TVI) and were further trained by Wicab according to specific procedures. The effectiveness of this human training was validated by end-user success in the objective performance tasks.
