
510(k) Data Aggregation

    K Number: K230320
    Date Cleared: 2023-10-26 (262 days)
    Regulation Number: 874.1820
    Device Name: NIM Standard Reinforced EMG Endotracheal Tube; CONTACT Reinforced EMG Endotracheal Tube

    Intended Use

    The EMG tube is indicated for use where continuous monitoring of the laryngeal musculature is required during surgical procedures. The EMG tube is not intended for postoperative use.

    Device Description

    Medtronic Xomed, Inc.'s NIM™ Standard Reinforced and NIM CONTACT™ Reinforced Endotracheal Tubes are flexible, reinforced endotracheal tubes with inflatable cuffs. The NIM EMG ET Tubes are made from silicone elastomer. Each tube is fitted with electrodes on the main shaft, which are exposed only for a short distance, slightly superior to the cuff. The electrodes are designed to make contact with the laryngeal muscles around the patient's vocal cords to facilitate electromyographic (EMG) monitoring of the laryngeal musculature during surgery when connected to an EMG monitoring device. Both the tube and cuff are manufactured from material that allows the tube to conform readily to the shape of the patient's trachea with minimal trauma to tissues.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for the Medtronic Xomed, Inc. NIM Standard Reinforced EMG Endotracheal Tube and NIM CONTACT Reinforced EMG Endotracheal Tube. A submission of this type aims to demonstrate substantial equivalence to a legally marketed predicate device, rather than to prove through extensive clinical studies that a de novo device meets specific performance criteria.

    Therefore, the document does not contain the detailed acceptance criteria, or the supporting study demonstrating that the device meets them, that one would find for a novel device or for an AI/ML-based device seeking de novo authorization or PMA approval. Specifically, it lacks:

    • A table of acceptance criteria and reported device performance: This document focuses on demonstrating equivalence to predicate devices, not on meeting predefined performance metrics for a novel technology.
    • Sample sizes for test sets, data provenance, number/qualifications of experts, adjudication methods, MRMC studies, standalone performance, or type of ground truth for a test set. These elements are typically found in studies designed to validate the performance of a diagnostic or therapeutic device against a gold standard, especially for AI/ML products.
    • Sample size for the training set or how ground truth for the training set was established: This information is pertinent to machine learning models, which are not the subject of this 510(k) submission.

    What the document does describe regarding "performance data" is limited to nonclinical testing for usability and labeling design validation to support substantial equivalence.

    Here's an analysis of the "Performance Data" section based on the provided text:

    1. Acceptance Criteria and Reported Device Performance:

    The document does not present quantitative acceptance criteria for the device's technical or diagnostic performance in the way one would see for a novel medical device like an AI algorithm. Instead, the "acceptance" is tied to proving substantial equivalence to predicate devices. The performance data presented relates to usability and labeling effectiveness, which are indirectly linked to safety and effectiveness.

    • Usability Goal: "The Anesthesiologist/Nurse Anesthetist shall be able to intubate the patient and maintain the airway without introducing any unrealized use errors or critical tasks."
    • Critical Tasks: "Confirm the Critical tasks were completed without any unacceptable Use Error that may have resulted in unmitigated potential hazards."
    • Risk Mitigations: "To show the risk mitigations were effective in regard to labeling and training."
    • Labeling Design Validation User Need: "The product labeling was to be understandable and provide needed information for proper safe and effective use of the device."

    Reported Performance:
    "The results of these validations with the modified proposed labeling demonstrated that the usability goal was achieved, all critical tasks were completed without introducing any additional unmitigated hazards, the user need and risk control measures were met and the training mitigations proposed were effective."

    2. Sample Sizes and Data Provenance:

    The document mentions "Summative Usability Validation" and "Labeling Design Validation" but does not specify the sample size (e.g., number of users or simulated scenarios) used for these nonclinical tests.
    The data provenance is implied to be internal testing conducted by the manufacturer, Medtronic Xomed, Inc., as part of the 510(k) submission process; it is not retrospective or prospective clinical data in the usual sense. The country of origin is not explicitly stated, though a US origin is implied by the FDA submission.

    3. Number of Experts and Qualifications:

    The document identifies "Anesthesiologist/Nurse Anesthetist" as the target users for the usability testing, but it does not specify how many participants took part in the usability or labeling validation studies, nor their qualifications (e.g., years of experience). These individuals served as test subjects or evaluators for the usability study, not as experts establishing ground truth in a diagnostic context.

    4. Adjudication Method:

    The document does not mention any adjudication method. This is not relevant for the type of usability and labeling validation studies described. Adjudication is typically used in studies where multiple human readers or algorithms produce interpretations that need to be reconciled to establish a ground truth (e.g., for diagnostic accuracy studies).

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    No MRMC study was conducted or reported. The device is not an AI/ML diagnostic aid that assists human readers. It is an endotracheal tube with EMG monitoring capabilities. The "performance data" is purely for usability and labeling effectiveness, not comparative diagnostic accuracy for human readers with or without an AI.

    6. Standalone Performance (Algorithm Only):

    This section is not applicable. The device is a physical medical device (an endotracheal tube), not a software algorithm. Therefore, there is no "standalone performance" in the context of an algorithm's output.

    7. Type of Ground Truth Used:

    For the usability and labeling studies, the "ground truth" was essentially defined by the successful completion of critical tasks without unacceptable use errors and the understandability of the labeling, as determined by the study design and its evaluators. This is not "expert consensus," "pathology," or "outcomes data" in the clinical diagnostic sense. It's about demonstrating safe and effective interaction with the device.

    8. Sample Size for the Training Set:

    This is not applicable. The device is a physical product, not an AI/ML model that requires a training set.

    9. How Ground Truth for the Training Set Was Established:

    This is not applicable for the same reason as above.

    In summary, the provided FDA 510(k) summary focuses on demonstrating substantial equivalence to predicate devices, with performance data limited to nonclinical usability and labeling validation. It does not provide the details typically found in studies for novel diagnostic or AI/ML devices that aim to prove specific performance metrics against a clinical ground truth.
