
510(k) Data Aggregation

    K Number: K162973
    Date Cleared: 2017-02-06 (104 days)
    Product Code:
    Regulation Number: 876.1500
    Reference & Predicate Devices:
    Predicate For: N/A
    Intended Use

    The EndoWrist® Suction Irrigator is intended for use in all endoscopic surgical applications where the compatible da Vinci Surgical System is indicated.

    The EndoWrist Suction Irrigator is designed to be used in conjunction with an Intuitive Surgical da Vinci Surgical System and compatible suction and irrigation sources and tubing sets for delivering fluid to the surgical site and for evacuation and aspiration of fluids. The instrument may also be used for retraction and blunt dissection of tissue.

    Device Description

    The EndoWrist® Suction Irrigator is a single-use, disposable instrument developed for use with the da Vinci Surgical System. The instrument provides the surgeon with the ability to activate suction and irrigation directly from the surgeon console or by depressing buttons on the instrument; activation from the surgeon console is controlled through the console's foot pedals. The suction and irrigation sources are supplied by conventional devices (suction - canister, hospital line, etc.; and irrigation - closet, compressed air, gravity flow, etc.) that are available in an operating room setting.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for the Intuitive Surgical EndoWrist® Suction Irrigator. This is a medical device submission that aims to demonstrate substantial equivalence to a legally marketed predicate device, rather than proving that the device meets specific acceptance criteria through extensive studies designed for that purpose. Therefore, many of the requested details about acceptance criteria and study particulars, especially those relevant to AI/algorithm performance, are not present in this document.

    However, based on the type of information provided, I can infer and extract some relevant details:

    1. A table of acceptance criteria and the reported device performance

    The document doesn't explicitly state "acceptance criteria" in a quantitative table with corresponding "reported device performance" as one might find for an AI algorithm's metrics. Instead, it compares the characteristics of the subject device to its predicate. The acceptance is implicitly based on demonstrating that the new device is "substantially equivalent" to the predicate.

    | Characteristic | Acceptance Criteria (Predicate) | Reported Device Performance (Subject Device K162973) |
    |---|---|---|
    | Instrument Shaft OD | 0.33" | 0.33" |
    | Shaft Lumen ID | 0.17" | 0.17" |
    | Shaft Length | 17.1" | 22.2" |
    | Tubing Length | 13' | 13' |
    | Tubing ID/OD | 0.25"/0.375" | 0.25"/0.375" |
    | Irrigation Flow Rate | $\geq$ 1.75 L/min | $\geq$ 1.75 L/min |
    | Valve Function | Sliding Cylinders | Sliding Cylinders |
    | Tip Articulation | Pitch/Yaw | Pitch/Yaw |
    | Provided Sterile/Single Use | Yes | Yes |
    | Sterilization Method | EtO | EtO |
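
    As an illustration only (nothing in the submission describes software of this kind), the comparison above can be treated as structured data and checked programmatically. In the hypothetical Python sketch below, the PREDICATE and SUBJECT_K162973 dictionaries and the compare_to_predicate helper are invented for this example; the values are copied from the table.

```python
# Hypothetical sketch: the predicate-vs-subject comparison from the table
# above expressed as data, with a check that flags any characteristic whose
# value differs from the predicate. The values come from the table; the
# structure and helper are illustrative, not part of the 510(k) submission.

PREDICATE = {
    "Instrument Shaft OD": '0.33"',
    "Shaft Lumen ID": '0.17"',
    "Shaft Length": '17.1"',
    "Tubing Length": "13'",
    "Tubing ID/OD": '0.25"/0.375"',
    "Irrigation Flow Rate": ">= 1.75 L/min",
    "Valve Function": "Sliding Cylinders",
    "Tip Articulation": "Pitch/Yaw",
    "Provided Sterile/Single Use": "Yes",
    "Sterilization Method": "EtO",
}

# The subject device matches the predicate except for shaft length (22.2" vs 17.1").
SUBJECT_K162973 = {**PREDICATE, "Shaft Length": '22.2"'}


def compare_to_predicate(subject: dict, predicate: dict) -> list[str]:
    """Return the characteristics whose values differ from the predicate."""
    return [name for name in predicate if subject.get(name) != predicate[name]]


if __name__ == "__main__":
    for name in compare_to_predicate(SUBJECT_K162973, PREDICATE):
        print(f"Differs from predicate: {name}: "
              f"{PREDICATE[name]} -> {SUBJECT_K162973[name]}")
```

    Run as-is, this prints only the shaft-length difference, mirroring the single dimensional change noted in the table.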

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document describes "bench testing" and "clinical models (animals/cadavers)" for design verification and validation. However, it does not specify sample sizes for these tests. It also does not explicitly state the country of origin or whether the data was retrospective or prospective, though animal/cadaver studies are generally prospective in nature for device testing.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This information is not provided in the document. The "ground truth" in this context would likely be defined by expert observation and assessment of the device's function during the animal/cadaver studies, but the number or qualifications of these experts are not detailed.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This information is not provided.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it

    No such study is mentioned. This is not an AI/algorithm-based device, so MRMC studies in the context of AI assistance are not relevant here.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Not applicable, as this is not an AI/algorithm-based device.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    For "Design Verification (bench testing)", the ground truth would be against engineering specifications and measurable performance metrics. For "Design Validation (animal/cadaver)", the ground truth would be based on observations of safety and efficacy in a simulated clinical setting, likely assessed by experts (e.g., surgeons performing the procedures). This isn't explicitly defined as "expert consensus," "pathology," or "outcomes data" but falls more into the category of expert observation of device function and tissue interaction.

    8. The sample size for the training set

    Not applicable. This is not an AI/machine learning device that requires a training set.

    9. How the ground truth for the training set was established

    Not applicable.
