510(k) Data Aggregation

    K Number
    K131205
    Date Cleared
    2013-08-09

    (102 days)

    Product Code
    Regulation Number
    878.4400
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The ArthroCare Head and Neck Coblation Wand is indicated for ablation, resection, and coagulation of soft tissue and hemostasis of blood vessels in otorhinolaryngology (ENT) surgery including:

    • Cysts
    • Tumors
    • Head, neck, and oral surgery
    • Neck mass

    The ArthroCare Head and Neck Wand is designed to be used only with the ArthroCare ENT Coblator II (CII) Surgery System Controller and ArthroCare Flow Control Unit.

    Device Description

    The ArthroCare Head and Neck Coblation Wand is a bipolar, single use, electrosurgical device designed for use with the ArthroCare Coblator II System Controller for specific head and neck indications in otorhinolaryngology (ENT) procedures.

    AI/ML Overview

    This submission describes a medical device, the ArthroCare Head and Neck Coblation Wand, seeking 510(k) clearance based on substantial equivalence to a predicate device. As such, the concept of "acceptance criteria" and "device performance" as might be seen for a new AI/software-based device performing diagnostic tasks is not directly applicable in the same way.

    Instead, the submission focuses on proving that the new device performs comparably to an existing, legally marketed predicate device. The "acceptance criteria" here are implicitly that the new device meets its design and performance specifications and functions comparably to the predicate device for its intended use. The "reported device performance" is the outcome of the bench and animal studies, confirming this comparability.

    Here's an analysis based on the provided text, addressing your points where applicable:

    1. Table of Acceptance Criteria and Reported Device Performance

    Given that this is a 510(k) for a physical electrosurgical device, the acceptance criteria are not presented in a quantitative table format suitable for diagnostic AI. Instead, the criteria are functional and comparative.

    | Acceptance Criterion (Implicit) | Reported Device Performance |
    | --- | --- |
    | Meets all design and performance specifications | "The Design Verification test results demonstrate that the Head and Neck Wand meets all design and performance specifications..." |
    | Performs comparably to the predicate device (Gyrus Dissector Plasma Knife) for ablation, resection, coagulation, and hemostasis of soft tissue in ENT procedures | "Bench testing was performed to evaluate the performance of the Head and Neck Wand compared to the predicate Gyrus Dissector Plasma Knife. ... [The device] performs comparably to the predicate device." "A Pre-Clinical study was conducted in sheep to evaluate the tissue effects using the Head and Neck Wand compared to the predicate Gyrus Dissector Plasma Knife. Based on the test results, the proposed device is substantially equivalent to the predicate." |
    | No new questions of safety or effectiveness | "All testing conducted demonstrates that the ArthroCare Head and Neck Coblation Wand performs as intended when used in accordance with its labeling. The ArthroCare Head and Neck Coblation Wand is substantially equivalent to the predicate Gyrus Dissector Plasma Knife in terms of design, principle of operation, and indications for use and raises no new questions of safety or effectiveness." |


    The subsequent points (2-9) are very specific to studies involving data-driven algorithms (like AI/ML medical devices) and are generally not applicable to the 510(k) submission of a physical electrosurgical device like the ArthroCare Head and Neck Coblation Wand. The submission explicitly states "No clinical data are included in this submission," further indicating that these types of analyses were not performed. However, I will address them to clarify why they are not relevant in this context.

    2. Sample size used for the test set and the data provenance

    • Not Applicable. This device is a physical electrosurgical tool. The "test sets" would refer to the samples and conditions used in bench and animal testing, not a dataset for an algorithm.
      • Bench Testing: The submission mentions "Design Verification test results" but does not specify sample sizes or the number of tests performed.
      • Animal Study: "A Pre-Clinical study was conducted in sheep." The number of animals used is not specified.
      • Data Provenance: Not applicable in the context of data for an algorithm. The studies were conducted by the manufacturer (ArthroCare Corporation).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Not Applicable. Ground truth in this context typically refers to expert labels for diagnostic images or clinical outcomes for AI models. For a physical device, performance is measured against engineering specifications and direct observation of tissue effects by veterinary or surgical experts during animal studies (though not explicitly detailed here). The submission does not mention external experts establishing ground truth for testing.

    4. Adjudication method for the test set

    • Not Applicable. Adjudication methods like 2+1 or 3+1 are used for resolving discrepancies in expert labels for AI ground truth. This is not relevant for a physical device performance study.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it

    • No. An MRMC study is specific to evaluating diagnostic systems, especially with AI assistance. This submission does not involve AI, diagnostic tasks, or human readers.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Not Applicable. This is a physical electrosurgical device, not an algorithm.

    7. The type of ground truth used

    • Not Applicable in the AI/ML sense. For bench testing, the "ground truth" would be the engineering specifications and expected physical properties or outcomes. For the animal study, the "ground truth" would be the observed tissue effects and their histopathological assessment by veterinary pathologists, compared between the device and the predicate. The submission states the animal study evaluated "tissue effects."

    8. The sample size for the training set

    • Not Applicable. This device does not use an algorithm requiring a training set.

    9. How the ground truth for the training set was established

    • Not Applicable. This device does not use an algorithm requiring a training set or ground truth establishment in this manner.