Video Endoscopy System, 3D Video Endoscopy System
The Video Endoscopy System & 3D Video Endoscopy System are intended to be used to provide illumination and visualization of the surgical field in a wide variety of diagnostic abdominal and thoracic minimally invasive procedures, including procedures involving the female reproductive organs (gynecology) and urological anatomy.
The proposed devices comprise the Video Endoscopy System and the 3D Video Endoscopy System. The Video Endoscopy System supports 2D image output; the 3D Video Endoscopy System supports 2D/3D image output.
The Video Endoscopy System is composed of the Video Endoscope (LPS21000/LPS21030) and the Video Endoscopy Processor (EVS100).
The Video Endoscopy Processor receives electrical signals from the Video Endoscope, processes them, and outputs the final image to the monitor. Two 2D Video Endoscope models (LPS21000/LPS21030) are available; each has a single image sensor at the distal end of the endoscope and is used in conjunction with the Video Endoscopy Processor (EVS100) to output 2D images.
The 3D Video Endoscopy System is composed of the Video Endoscope (LPS31000/LPS31030) and the Video Endoscopy Processor (EVS200).
The Video Endoscopy Processor receives electrical signals from the Video Endoscope, processes them, and outputs the final image to the monitor. Two 3D Video Endoscope models (LPS31000/LPS31030) are available; each has two image sensors at the distal end of the endoscope and is used in conjunction with the Video Endoscopy Processor (EVS200) to output 2D/3D images.
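The submission does not describe the processors' internal signal path. As a purely illustrative aid, the following minimal Python sketch shows one common way a 3D processor could combine the two distal sensor streams into a side-by-side stereoscopic frame while a 2D processor passes a single stream through; the function names, frame format, and packing scheme are all assumptions, not details from the submission.

```python
import numpy as np

# Hypothetical sketch only: the EVS100/EVS200 pipelines are not described in
# the submission. Names, shapes, and the packing scheme are illustrative.

FRAME_SHAPE = (1080, 1920, 3)  # 1920x1080 HD, RGB

def process_2d(frame: np.ndarray) -> np.ndarray:
    """2D mode (EVS100-style): pass the single sensor's frame through."""
    assert frame.shape == FRAME_SHAPE
    return frame

def process_3d(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """3D mode (EVS200-style): pack the two sensor frames side by side.
    Each view is horizontally subsampled 2:1 so the packed frame keeps
    the same 1920x1080 output size."""
    assert left.shape == right.shape == FRAME_SHAPE
    half_left = left[:, ::2, :]    # 1920 -> 960 columns
    half_right = right[:, ::2, :]
    return np.concatenate([half_left, half_right], axis=1)
```

In this sketch, a 3D-capable display would unpack the two halves and present one to each eye, and the EVS200's documented 2D output mode would correspond to simply forwarding one sensor's stream.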
The Video Endoscopy Processor is a non-sterile device. The Video Endoscope is a terminally sterilized device and must be sterilized by users before being used in surgery.
This submission describes the Surgnova Healthcare Technologies (Zhejiang) Co., Ltd. Video Endoscopy System & 3D Video Endoscopy System (K210116).
Here's an analysis of the acceptance criteria and study information provided:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document does not explicitly list quantitative acceptance criteria for optical performance. Instead, it states that "The device met all acceptance criteria for optical performance testing and was shown to have equivalent image quality to the predicate." This implies a comparative standard rather than absolute numerical targets.
The reported device performance is therefore largely qualitative, framed as equivalence to the predicate.
| Performance Characteristic | Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|---|
| Resolution | Equivalent to predicate (1920x1080 HD) | 1920x1080 HD (same as predicate) |
| Brightness | Met acceptance criteria (not specified quantitatively) | Met; image quality equivalent to predicate |
| White Balance | Met acceptance criteria (not specified quantitatively) | Met; image quality equivalent to predicate |
| 3D-2D Mode | Met acceptance criteria (not specified quantitatively) | Met; image quality equivalent to predicate |
| Color Performance | Met acceptance criteria (not specified quantitatively) | Met; image quality equivalent to predicate |
| Field of View | Equivalent to predicate (90°) | 90° (same as predicate) |
| Geometric Distortion | Met acceptance criteria (not specified quantitatively) | Met; image quality equivalent to predicate |
| Signal-to-Noise Ratio (SNR) | Met acceptance criteria (not specified quantitatively) | Met; image quality equivalent to predicate |
| Dynamic Range | Met acceptance criteria (not specified quantitatively) | Met; image quality equivalent to predicate |
| Image Intensity Uniformity (IIU) | Met acceptance criteria (not specified quantitatively) | Met; image quality equivalent to predicate |
| Photobiological Safety | Compliance with IEC 62471:2006 | In compliance with IEC 62471:2006 |
| Biocompatibility | Not toxic, irritating, or sensitizing (ISO 10993 series) | Not toxic, irritating, or sensitizing |
| Software Validation | Compliance with FDA Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices | In compliance with the FDA guidance |
| Sterilization Validation | Compliance with ISO 17665-1 | Validated according to ISO 17665-1 |
| Package Verification | Compliance with ISTA 2A-11 and ASTM D4169-16 | Tested according to ISTA 2A-11 and ASTM D4169-16 |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not provide details on sample sizes for the "test set" in terms of images or data points for the individual optical performance tests. The testing described is non-clinical bench testing rather than a study involving clinical data from patients. Therefore, terms like "country of origin of the data," "retrospective or prospective," or "test set" in the context of patient data do not apply here. The testing was performed on the device prototypes or production units.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable. The performance evaluation was based on bench testing against established standards and comparison to a predicate device's specifications, not on human expert review of clinical images to establish ground truth such as disease presence/absence.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable. There was no adjudication method used, as the testing involved objective measurements and comparisons against technical specifications and standards, not subjective expert reviews requiring consensus.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study was done. This submission is for a video endoscopy system as a medical device for visualization, not an AI-powered diagnostic or assistive tool.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Yes; the primary evaluation was standalone bench testing of the device's optical and functional characteristics. The device itself stands in for the "algorithm" in this context, since the listed technical specifications were measured independently of any human operator.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The "ground truth" for the non-clinical testing was based on:
- Established engineering and performance standards: e.g., IEC 60601 series, ISO 10993 series, IEC 62471.
- Predicate device specifications: The comparison table explicitly states characteristics like resolution, field angle, and depth of field that are identical or comparable to the predicate.
- Objective measurements: The various optical performance tests (resolution, brightness, white balance, color performance, etc.) would have involved objective measurement techniques with specific targets or ranges derived from industry standards or performance requirements for such devices; a sketch of two such measurements follows this list.
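As an illustration of what such objective measurements can look like, here is a minimal Python sketch of two of the metrics named in the table (SNR and image intensity uniformity) using common textbook-style definitions. The submission does not give the actual formulas, test targets, or region definitions; everything below is an assumption.

```python
import numpy as np

# Illustrative, textbook-style definitions; the actual test methods and
# acceptance thresholds used in the submission are not disclosed.

def snr_db(frames: np.ndarray) -> float:
    """Temporal SNR from repeated captures of a uniform gray target.
    `frames` has shape (N, H, W): N grayscale captures of the same scene."""
    mean_signal = frames.mean(axis=0)    # per-pixel mean over captures
    temporal_noise = frames.std(axis=0)  # per-pixel std over captures
    return 20.0 * np.log10(mean_signal.mean() / temporal_noise.mean())

def intensity_uniformity(frame: np.ndarray) -> float:
    """Image intensity uniformity (IIU) of a flat-field capture:
    corner-region mean over center-region mean (1.0 = perfectly uniform)."""
    h, w = frame.shape
    center = frame[h // 2 - h // 10 : h // 2 + h // 10,
                   w // 2 - w // 10 : w // 2 + w // 10]
    corner = frame[: h // 5, : w // 5]
    return float(corner.mean() / center.mean())
```

In practice, measurements like these are taken against standardized test charts or integrating-sphere targets, and the pass/fail thresholds come from the manufacturer's specifications or the predicate comparison.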
8. The sample size for the training set
Not applicable. This device is a video endoscopy system, not an AI or machine learning algorithm that requires a training set of data.
9. How the ground truth for the training set was established
Not applicable, as there is no training set for this type of medical device submission.