510(k) Data Aggregation (109 days)
The da Vinci Firefly Imaging System is intended to provide real-time endoscopic visible and near-infrared fluorescence imaging. The da Vinci Firefly Imaging System enables surgeons to perform minimally invasive surgery using standard endoscopic visible light as well as visual assessment of vessels, blood flow and related tissue perfusion, and at least one of the major extra-hepatic bile ducts (cystic duct, common bile duct or common hepatic duct), using near-infrared imaging.
Fluorescence imaging of biliary ducts with the da Vinci Firefly Imaging System is intended for use with standard of care white light and, when indicated, intraoperative cholangiography. The device is not intended for standalone use for biliary duct visualization.
The Intuitive Surgical da Vinci® Firefly™ Imaging System uses the existing endoscopic imaging system as submitted in K131861 (cleared March 28, 2014) for high definition (HD) visible light and near-infrared fluorescence imaging during minimally invasive surgery. The da Vinci® Firefly™ Imaging System utilizes the following existing components of the da Vinci Xi Surgical System:
- 8 mm 0° and 30° endoscopes (PNs 470026 and 470027)
- Endoscope Controller (PN 372601)
For near-infrared fluorescence imaging, the Fluorescence Imaging Kit is also required. The kits will be provided to the end user in an identical manner to the current supply and distribution chain for the predicate device. The kit is unchanged from K101077 (February 4, 2011)/K124031 (September 13, 2013).
- Fluorescence Imaging Kit (PN 950156) [includes Indocyanine Green (ICG) fluorescence imaging agent, aqueous solvent, and syringe trays]
This document (K141077) describes the da Vinci® Firefly™ Imaging System, which is intended for real-time endoscopic visible and near-infrared fluorescence imaging during minimally invasive surgery. It allows for visual assessment of vessels, blood flow, related tissue perfusion, and major extra-hepatic bile ducts using near-infrared imaging. It is intended for use with standard white light and intraoperative cholangiography, but not for standalone use for biliary duct visualization.
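For context on how a visible-light frame and a near-infrared fluorescence channel can be presented together, the sketch below shows one simple way to composite a normalized NIR signal over a white-light endoscopic image. This is a minimal, hypothetical illustration only; the function name, threshold, green tint, and alpha-blending scheme are assumptions and do not describe the actual image processing in the da Vinci Firefly Imaging System.

```python
import numpy as np

def overlay_fluorescence(white_light_rgb: np.ndarray,
                         nir_intensity: np.ndarray,
                         threshold: float = 0.1,
                         tint=(0.0, 1.0, 0.3)) -> np.ndarray:
    """Blend a normalized NIR fluorescence channel over a white-light RGB frame.

    white_light_rgb: HxWx3 float array in [0, 1]
    nir_intensity:   HxW float array in [0, 1] (normalized NIR signal)
    """
    # Suppress background signal below the display threshold, then rescale.
    signal = np.clip(nir_intensity - threshold, 0.0, None) / (1.0 - threshold)
    alpha = signal[..., np.newaxis]                      # per-pixel blend weight
    tint_rgb = np.asarray(tint)[np.newaxis, np.newaxis, :]
    # Alpha-blend the tinted fluorescence signal over the visible-light image.
    return (1.0 - alpha) * white_light_rgb + alpha * tint_rgb

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.uniform(0.2, 0.8, size=(480, 640, 3))    # stand-in white-light frame
    nir = np.zeros((480, 640))
    nir[200:280, 300:380] = 0.9                          # stand-in fluorescing region
    composite = overlay_fluorescence(frame, nir)
    print(composite.shape, composite.min(), composite.max())
```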
Here's a breakdown of the acceptance criteria and supporting studies as presented in the document:
1. A table of acceptance criteria and the reported device performance:
The document does not explicitly present a table of quantitative acceptance criteria with corresponding performance metrics like sensitivity/specificity/accuracy for the imaging system's clinical performance. Instead, it focuses on demonstrating that the device meets safety and effectiveness requirements through various tests. The acceptance criteria are implicitly that the device performs as intended, meets design requirements, and is substantially equivalent to predicate devices.
| Test Category | Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|---|
| Bench Testing | Device meets dimensional, mechanical, functional, and electrical requirements and specifications. Objective pass/fail criteria were defined. | All final tests PASSED. Verified optical, illumination, mechanical, environmental, and labeling aspects. |
| Animal Testing (Design Validation) | Device meets user needs and intended use as documented in product requirements. Objective pass/fail criteria were defined. | All final tests PASSED. Evaluated Firefly mode performance characteristics in various surgical tasks. |
| Animal Testing (Device Comparison) | Performance is comparable to the predicate device (IS3000 Firefly mode) in basic clinical function and fluorescence image quality. | All final tests PASSED. Side-by-side comparison showed comparable fluorescence image quality. |
| Animal Testing (Surgeon Evaluation) | Clinically acceptable performance allowing for safe and effective surgical use, as assessed by independent external surgeons. | All final tests PASSED. Evaluators assessed various Firefly vision parameters in at least one application. |
| Human Factors and Usability Testing | Safe and effective use by intended users in the intended use environment, addressing critical tasks and use-related risks. | Summative usability validation study conducted with 16 teams. Results provided evidence of safe and effective use. |
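To make the "objective pass/fail criteria" referenced in the table above concrete, the sketch below shows one possible way a verification protocol criterion could be recorded and evaluated. The class, field names, test ID, and numeric limits are all hypothetical; the submission does not describe how the protocol criteria were actually captured.

```python
from dataclasses import dataclass

@dataclass
class AcceptanceCriterion:
    """One objective pass/fail criterion from a design verification protocol."""
    test_id: str
    description: str
    lower_limit: float
    upper_limit: float
    units: str

    def evaluate(self, measured: float) -> bool:
        # PASS when the measured value falls within the specified limits.
        return self.lower_limit <= measured <= self.upper_limit

# Hypothetical example: an illumination output check on one endoscope unit.
criterion = AcceptanceCriterion(
    test_id="ILL-001",
    description="Endoscope illumination output at working distance",
    lower_limit=80.0,
    upper_limit=120.0,
    units="percent of nominal",
)
print("PASS" if criterion.evaluate(97.5) else "FAIL")
```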
2. Sample size used for the test set and the data provenance:
- Bench Testing: Sample sizes "up to 5 units" were used for design verification tests (Optical, Illumination, Mechanical, Environmental, Labeling). Data provenance is not explicitly stated but implies internal laboratory testing.
- Animal Testing:
- Design Validation: Two labs were used, one with a canine model and one with a porcine model.
- Device Comparison: Two porcine models were used.
- Surgeon Evaluation: Four labs were used, utilizing canine or porcine models.
- Data provenance is from animal models (canine and porcine) in a simulated clinical setting. This is prospective testing.
- Human Factors and Usability Testing: 16 teams of users (surgeons and operating room staff) were involved. The study was conducted in a simulated operating room, which is a prospective test setting.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document mentions "independent, external surgeon evaluators" were used for the Animal Testing (Surgeon Evaluation) section. Four such surgeons served as evaluators.
- Their specific qualifications (e.g., years of experience) are not detailed other than being "independent, external surgeons".
- For bench and animal testing, "objective pass/fail criteria were defined in the protocol and used," implying internal expert definition, but specific numbers and qualifications are not provided.
- For human factors, 16 "teams of users (surgeons and operating room staff)" participated. Their specific qualifications are not detailed beyond their roles.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- The document does not explicitly describe an adjudication method like 2+1 or 3+1 for any of the testing.
- For the surgeon evaluations, it states "All evaluators completed at least one Firefly application to assess the various Firefly vision parameters." This suggests individual assessments, but no process for resolving discrepancies among the four evaluators is detailed. For other tests, "objective pass/fail criteria" were defined, implying a predetermined standard rather than expert adjudication of subjective outcomes.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study involving human readers with AI assistance versus without AI assistance was not mentioned or performed.
- The device described is an imaging system, not an AI diagnostic tool. The "Firefly" system provides near-infrared fluorescence imaging. The "Device Comparison" in animal testing was a side-by-side comparison of image quality between the new device and a predicate device, not a comparison of human reader performance with and without AI assistance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- The device is an imaging system used by surgeons in real-time. It is not an algorithm-only device that operates without human interaction.
- The product description explicitly states, "The device is not intended for standalone use for biliary duct visualization," meaning it's always used with human surgical intervention and standard white light. Therefore, no standalone (algorithm-only) performance testing would be relevant or performed.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Bench Testing: Ground truth was based on pre-defined technical specifications and objective pass/fail criteria for dimensional, mechanical, functional, and electrical requirements.
- Animal Testing (Design Validation, Device Comparison, Surgeon Evaluation): Ground truth was based on:
- Pre-defined test procedures and objective pass/fail criteria.
- User needs and intended use documented in product requirements.
- Comparison to the predicate device (IS3000 Firefly mode) for image quality.
- Assessments by independent, external surgeon evaluators regarding clinically acceptable performance.
- While no pathology-based confirmation (e.g., for identifying bile ducts) is explicitly described, the clinical assessment by surgeons in an animal model serves as the practical ground truth for the imaging system's utility in visualizing these structures in a surgical context.
- Human Factors and Usability Testing: Ground truth was based on whether the system allowed for "safe and effective use" during "typical workflow scenarios" and "troubleshooting scenarios related to safety-critical tasks" in a simulated environment, as assessed by observing user teams.
8. The sample size for the training set:
- The document describes performance testing for a medical device (an imaging system), not an AI/ML algorithm that requires a "training set." Therefore, the concept of a training set sample size is not applicable to this submission. The tests performed are for design verification and validation of hardware and integrated software.
9. How the ground truth for the training set was established:
- As noted above, this device does not involve a "training set" in the context of AI/ML. All ground truth establishment described (e.g., objective pass/fail criteria, surgeon assessments) pertains to testing and validation, not to the training of a machine learning model.