
510(k) Data Aggregation

    K Number: K212101
    Date Cleared: 2021-11-23 (140 days)
    Product Code:
    Regulation Number: 876.1500
    Reference & Predicate Devices:
    Predicate For:
    Intended Use

    The da Vinci SP Firefly Imaging System is intended to provide real-time endoscopic visible and near-infrared fluorescence imaging. The da Vinci SP Firefly Imaging System enables surgeons to perform minimally invasive surgery using standard endoscopic visible light as well as visual assessment of vessels, blood flow and related tissue perfusion, using near-infrared imaging.

    Device Description

    The da Vinci SP Firefly Imaging System is a fully-integrated, adjunct imaging system for the da Vinci SP Surgical System. The da Vinci SP Firefly Imaging System consists of enhanced existing components of the da Vinci SP Surgical System: the endoscope controller, the endoscope, and supporting software functions. The endoscope controller provides a light source, either visible light or a near-infrared (NIR) excitation laser. The endoscope transmits visible light from the endoscope controller to illuminate the surgical site. The near-infrared light is used to fluoresce an intravenously-injected imaging agent, indocyanine green (ICG), which is imaged by the endoscope and displayed at the Surgeon Console.

    AI/ML Overview

    The provided text contains insufficient information on specific acceptance criteria or on the detailed study demonstrating that the device meets them. The document states that the da Vinci SP Firefly Imaging System underwent various tests:

    • Bench tests: To verify requirements and risk mitigations, focusing on endoscope design verification, illumination reliability testing, and human factors evaluation.
    • Clinical validation testing: Using an animal model (porcine) to evaluate its safety and effectiveness for use in surgery.

    However, the specific "acceptance criteria" (e.g., performance metrics, thresholds) and the "reported device performance" against these criteria are not explicitly listed in a table format, nor are details provided for most of the requested points.

    Here's a breakdown of what can be inferred or what is explicitly missing:

    Missing Information:

    • A table of acceptance criteria and reported device performance.
    • Sample size used for the test set (e.g., number of cases, number of images).
    • Data provenance for the test set (e.g., country of origin of the data, retrospective or prospective).
    • Number of experts used to establish ground truth for the test set.
    • Qualifications of those experts.
    • Adjudication method for the test set.
    • Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done.
    • Effect size if an MRMC study was done.
    • Whether a standalone (algorithm-only, without human-in-the-loop) performance study was done.
    • The type of ground truth used (beyond "animal model" for clinical validation).
    • Sample size for the training set.
    • How the ground truth for the training set was established.

    Information provided (or can be reasonably inferred):

    1. Table of Acceptance Criteria & Reported Device Performance
       Acceptance Criteria: Not explicitly stated with specific metrics or thresholds. The text generally mentions "verify requirements and risk mitigations" and "evaluate its safety and effectiveness for use in surgery."
       Reported Device Performance: Not explicitly stated with specific results. The conclusion is that "no issues of safety or effectiveness" were found, and the device is "substantially equivalent to its predicate device."
    2. Sample Size and Data Provenance (Test Set)
       Sample Size: Not specified (e.g., number of animals, number of procedures).
       Data Provenance: Animal model (porcine). The study was "clinical validation testing using an animal model," implying prospective data collection within that model.
    3. Number of Experts & Qualifications (Ground Truth for Test Set)
       Not specified. As this was an animal-model validation, the experts would likely be veterinary surgeons or researchers, but no details are provided.
    4. Adjudication Method (Test Set)
       Not specified.
    5. MRMC Comparative Effectiveness Study & Effect Size
       Not mentioned. The studies described focus on device function, safety, and effectiveness in an animal model, not on human reader performance with or without AI assistance.
    6. Standalone (Algorithm-Only) Performance
       The device is an "imaging system" that "enables surgeons to perform minimally invasive surgery" and provides "visual assessment." This implies a human-in-the-loop system in which the imaging assists the surgeon. There is no mention of an AI algorithm making independent diagnoses or assessments without human interpretation, so a standalone performance study in the diagnostic-AI sense is unlikely to be relevant and is not described here.
    7. Type of Ground Truth Used
       For the "clinical validation testing," the ground truth was based on observations and assessments within an "animal model (porcine)" to evaluate "safety and effectiveness for use in surgery." This would typically involve direct observation, possibly pathology (if tissues were excised), and functional assessment during the surgical procedures. For the bench tests, the ground truth would be engineering specifications and design verification.
    8. Sample Size for Training Set
       Not applicable or not specified. The document describes verification and validation of an imaging system, not a machine learning model that would require a separate training set. The "supporting software functions" mentioned are likely embedded system software rather than adaptive AI requiring a large training dataset in the usual AI/ML sense.
    9. How Ground Truth for Training Set Was Established
       Not applicable or not specified, for the same reasons as above.

    In summary, the provided text confirms that verification and validation were performed through bench testing and clinical validation in a porcine model, concluding that the device is safe, effective, and substantially equivalent to its predicate. However, it does not offer the granular detail requested concerning specific acceptance criteria, comprehensive performance metrics, expert involvement, or the methodology typically associated with validating an AI/ML algorithm. This is largely because the device in question is an imaging system, not an AI diagnostic/assistive algorithm in the typical sense that would necessitate such detailed ground truth and training/test set breakdowns.
