
510(k) Data Aggregation

    K Number: K211947
    Date Cleared: 2021-11-03 (133 days)
    Regulation Number: 874.4680
    Intended Use

    GlideScope® BFlex™ Single-Use Bronchoscopes are intended to work with a video monitor, in conjunction with non-powered endoscopic accessories and other equipment for endoscopy within the airways and tracheobronchial tree.

    Device Description

    The GlideScope® BFlex™ 2.8 Single-Use Bronchoscope is one component of the GlideScope® BFlex™ Single-Use Bronchoscope System. The system consists of a single-use flexible bronchoscope, a reusable monitor, and a reusable cable. The GlideScope® BFlex™ Single-Use Bronchoscope System is intended to provide real-time viewing and recording for a wide range of airway procedures.

    Similar to the predicate GlideScope® BFlex™ 3.8, 5.0, and 5.8 Single-Use Bronchoscopes, the GlideScope® BFlex™ 2.8 Single-Use Bronchoscope is distributed sterile and is for single use only. The GlideScope® BFlex™ bronchoscopes operate with a portable reusable GlideScope video monitor (GVM or Core monitors) for purposes of image display.

    AI/ML Overview

    The provided text describes the 510(k) submission for the GlideScope® BFlex™ 2.8 Single-Use Bronchoscope. It primarily focuses on demonstrating substantial equivalence to predicate devices through design specifications and performance testing, rather than a clinical study involving AI assistance or complex human-in-the-loop evaluations.

    Therefore, many of the requested criteria (e.g., test-set sample size and data provenance, number of experts for ground truth, adjudication method, MRMC comparative effectiveness study, standalone algorithm performance, training-set details) are not applicable or not explicitly stated in this document. The device is a bronchoscope, not an AI or imaging diagnostic tool, and therefore would not typically undergo clinical validation studies with ground truth established by experts.

    Here's a breakdown based on the provided information:

    1. A table of acceptance criteria and the reported device performance:

    The document states: "All testing resulted in acceptance criteria passed." However, it does not provide a specific table detailing the acceptance criteria and the quantitative reported performance for each criterion. It lists the types of performance tests conducted.

    Acceptance Criterion Type                                                  Result
    Full System Requirements Testing                                           Passed
    Electrical Safety (AAMI / ANSI ES60601-1, IEC 60601-2-18)                  Passed
    Electromagnetic Compatibility (IEC 60601-1-2)                              Passed
    Optical Testing (ISO 8600-1, ISO 8600-3, ISO 8600-4)                       Passed
    Biocompatibility (ANSI AAMI ISO 10993-1)                                   Passed
    Aging Performance Testing                                                  Passed
    Sterile Packaging Integrity Testing                                        Passed
    Cleaning Testing                                                           Passed
    Design Validation (Usability Study per IEC 60601-1-6, IEC 62366-1,
      AAMI HE75, FDA Guidance)                                                 Found safe and effective

    2. Sample size used for the test set and the data provenance:

    • Sample Size: Not explicitly stated for each test. For the usability study, the document states that "All participants" were able to operate the device safely, but no specific number of participants is given.
    • Data Provenance: Not specified regarding country of origin. The testing appeared to be internal performance testing of the device itself, rather than data collected from patients.
    • Retrospective or Prospective: Not applicable as these are performance/engineering tests, not a clinical study on patient data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    This is not applicable as the ground truth was established by engineering specifications and recognized electrical, optical, and safety standards, rather than expert consensus on diagnostic imaging. The usability study involved users (likely healthcare professionals), but their role was to test the device's usability, not to establish diagnostic ground truth.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    This is not applicable as it's not a diagnostic study requiring adjudication of image interpretation or clinical outcomes.

    5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with vs. without AI assistance:

    This is not applicable. This submission is for a medical device (flexible bronchoscope), not an AI-powered diagnostic system. No MRMC study was mentioned or required.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    This is not applicable. The device is a bronchoscope, not an algorithm.

    7. The type of ground truth used:

    The ground truth or "acceptance criteria" for this device are established by engineering design specifications and compliance with international and national standards for medical electrical equipment, optical performance, biocompatibility, and usability (e.g., AAMI/ANSI, IEC, ISO standards).

    8. The sample size for the training set:

    This is not applicable. This is a hardware medical device; there is no "training set" in the context of an AI algorithm.

    9. How the ground truth for the training set was established:

    This is not applicable as there is no training set for an AI algorithm.
