510(k) Data Aggregation (347 days)
SmartSurgN Visualization System
The SmartSurgN Visualization System is intended to provide real-time endoscopic visible (VIS) and real-time near-infrared (NIR) fluorescence imaging. Upon intravenous administration and use of ICG consistent with its approved label, the SmartSurgN Visualization System enables surgeons to perform minimally invasive surgery using standard endoscopic visible light, as well as visual assessment of vessels, blood flow and related tissue perfusion, and of at least one of the major extra-hepatic bile ducts (cystic duct, common bile duct and common hepatic duct), using near-infrared imaging. Fluorescence imaging of biliary ducts with the SmartSurgN Visualization System is intended for use with standard of care white light and, when indicated, intraoperative cholangiography. The device is not intended for standalone use for biliary duct visualization.
The SmartSurgN Visualization System is designed to provide real-time endoscopic visible (VIS) and real-time near-infrared (NIR) fluorescence imaging during minimally invasive surgery. The SmartSurgN Visualization System comprises the following main components:
- EyeRSurgN Console with Camera Head
- IRLightSurgN Light Source
- 10mm ICG Laparoscope, 0° or 30°

The SmartSurgN Visualization System enables surgeons to perform minimally invasive surgery using standard endoscopic visible light, as well as visual assessment of vessels, blood flow and related tissue perfusion, and of at least one of the major extra-hepatic bile ducts (cystic duct, common bile duct and common hepatic duct), using near-infrared imaging. Fluorescence imaging of biliary ducts with the SmartSurgN Visualization System is intended for use with standard of care white light and, when indicated, intraoperative cholangiography. The device is not intended for standalone use for biliary duct visualization.

During use of the SmartSurgN Visualization System, the IRLightSurgN provides the light source for illumination of the surgical site. The IRLightSurgN is capable of outputting light in the visible spectrum as well as in the near-infrared spectrum. The user selects the image capture mode (Regular, IRMax, IRFlo, IRTrue), which determines the light spectrum used to capture imaging. The IRLightSurgN is connected to the SmartSurgN Laparoscope using a commercially available fiber optic light cable. The SSN Laparoscope in turn connects to the EyeRSurgN Camera Head. Images are acquired by the EyeRSurgN Camera Head and transmitted to the EyeRSurgN Console, which processes them and outputs them to a medical grade monitor. The SmartSurgN Visualization System can be used with any medical grade monitor with an HDMI or 3G-SDI input connection. The SmartSurgN Visualization System is intended to be used in conjunction with commercially available indocyanine green (ICG) imaging kits.
ICG is a tricarbocyanine dye which fluoresces after excitation under near infrared light at 806 nm, permitting visualization of anatomical structures.
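The mode-driven illumination described above can be sketched as a minimal state model. The capture-mode names (Regular, IRMax, IRFlo, IRTrue) come from the summary; the specific mode-to-spectrum assignments, and the class and variable names, are illustrative assumptions, since the summary states only that the selected mode determines the light spectrum used:

```python
from dataclasses import dataclass
from enum import Enum


class CaptureMode(Enum):
    """Image capture modes named in the 510(k) summary."""
    REGULAR = "Regular"
    IRMAX = "IRMax"
    IRFLO = "IRFlo"
    IRTRUE = "IRTrue"


# Hypothetical mapping of mode -> required illumination spectra.
# The actual assignments used by the device are not disclosed.
MODE_SPECTRA = {
    CaptureMode.REGULAR: {"visible"},
    CaptureMode.IRMAX: {"near-infrared"},
    CaptureMode.IRFLO: {"visible", "near-infrared"},
    CaptureMode.IRTRUE: {"visible", "near-infrared"},
}


@dataclass
class LightSource:
    """Stand-in for the IRLightSurgN: emits the spectra the selected mode needs."""
    active_spectra: frozenset = frozenset()

    def configure(self, mode: CaptureMode) -> None:
        # Switch the emitted spectra to match the user's capture mode.
        self.active_spectra = frozenset(MODE_SPECTRA[mode])


source = LightSource()
source.configure(CaptureMode.IRFLO)
print(sorted(source.active_spectra))  # ['near-infrared', 'visible']
```

This is only a sketch of the control flow the summary implies (user selects a mode on the console; the light source is driven accordingly), not the device's actual software.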
The provided document does not contain the detailed information required to fill out all the requested sections regarding acceptance criteria and a study proving device performance, especially in the context of an AI-based device and human reader improvement.
This document is a 510(k) summary for the SmartSurgN Visualization System, which is an endoscopic imaging system. It focuses on demonstrating substantial equivalence to predicate devices, not on evaluating an AI algorithm's performance or its impact on human readers.
Specifically, the document states: "Clinical testing was not required to demonstrate substantial equivalence to the predicate devices." This directly indicates that the type of clinical or performance study involving human readers and detailed performance metrics as requested (e.g., acceptance criteria table, MRMC study, ground truth establishment) was not conducted or reported in this 510(k) submission.
While the document mentions "Non-Clinical Performance Data" including a "benchmark study with the predicate devices to assess endoscopic video imaging in visible and near-infrared conditions" and assessment in "an animal model for simulated surgical environment feedback from clinicians," these are general performance tests and not a detailed clinical study demonstrating AI-assisted performance or human reader improvement.
Therefore, most of the requested fields cannot be answered based on the provided text.
Here's what can be extracted and what cannot:
1. A table of acceptance criteria and the reported device performance
- Cannot provide. The document does not specify quantitative acceptance criteria or tabulated performance results for an AI algorithm. It mentions "performance" and a "benchmark study," but reports no specific metrics or targets.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Cannot provide. No information about test set sample size or data provenance is given for a human-in-the-loop or AI performance study. "Clinical testing was not required." The "animal model" assessment is mentioned, but details are not provided.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- Cannot provide. Since no clinical study or test set requiring ground truth for an AI algorithm or human reader performance is detailed, there's no information on experts or their qualifications for establishing ground truth.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Cannot provide. No information on adjudication methods is present.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No. The document explicitly states: "Clinical testing was not required to demonstrate substantial equivalence to the predicate devices." Therefore, an MRMC study comparing human readers with and without AI assistance was not performed or reported here.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Cannot determine. While the device itself is a "Visualization System" and not explicitly an "AI algorithm," its performance characteristics are assessed. The "benchmark study with the predicate devices to assess endoscopic video imaging" suggests standalone imaging performance was tested, but not necessarily an AI algorithm's specific performance. The document does not describe an AI algorithm or its standalone performance metrics.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Cannot provide. No ground truth establishment is described for a performance study of an AI algorithm.
8. The sample size for the training set
- Cannot provide. The document does not describe the development or training of an AI algorithm, or any associated training set.
9. How the ground truth for the training set was established
- Cannot provide. As no AI training set is mentioned, there's no information on how its ground truth might have been established.