510(k) Data Aggregation
(224 days)
The VISERA CYSTO-NEPHRO VIDEOSCOPES OLYMPUS CYF TYPE V2 and CYF TYPE V2R have been designed to be used with an Olympus video system center, light source, documentation equipment, monitor, EndoTherapy accessories, and other ancillary equipment for endoscopic diagnosis and treatment within the bladder, urethra, and kidney.
The CYSTO-NEPHRO VIDEOSCOPES OLYMPUS CYF VH and CYF VHR have been designed to be used with an Olympus video system center, light source, documentation equipment, monitor, EndoTherapy accessories, and other ancillary equipment for endoscopic diagnosis and treatment within the bladder, urethra, and kidney.
The subject device has been designed to be used with an Olympus video system center, light source, documentation equipment, monitor, EndoTherapy accessories, and other ancillary equipment for endoscopic diagnosis and treatment within the bladder, urethra, and kidney. The subject devices consist of three parts: the control section, the insertion section, and the connector section. The basic principle of the subject device, including the user interface and operation, is identical to that of the predicate device.
This FDA 510(k) Summary describes a submission for a new line of endoscopes (VISERA CYSTO-NEPHRO VIDEOSCOPE OLYMPUS CYF TYPE V2, CYF TYPE V2R, CYF-VH, CYF-VHR). The submission asserts substantial equivalence to a predicate device (OLYMPUS CYF TYPE V2, VA2, V2R [K133538]).
The provided document describes a medical device, specifically a cysto-nephro videoscope used for endoscopic diagnosis and treatment within the bladder, urethra, and kidney. The information focuses on demonstrating substantial equivalence to a predicate device, rather than proving performance against specific acceptance criteria for an AI model. Therefore, many of the typical acceptance criteria and study details for an AI-powered device are not applicable or not present in this document.
However, I can extract the relevant information regarding performance data that was provided in support of the substantial equivalence determination for this non-AI medical device.
Key takeaway: This document explicitly states that "No clinical study was performed to demonstrate substantial equivalence" and "No animal study was performed to demonstrate substantial equivalence." The performance data focused on bench testing, reprocessing validation, biocompatibility, software verification, electrical safety, and risk management to demonstrate that the new device does not raise new safety or effectiveness concerns compared to the predicate device.
Here's a breakdown of the requested information based on the provided document:
A table of acceptance criteria and the reported device performance
Since this is a substantial equivalence submission for a traditional medical device (endoscope) and not an AI device, the "acceptance criteria" are not framed in terms of AI metrics (e.g., sensitivity, specificity, AUC). Instead, they relate to functional performance and safety validations.
- Reprocessing Validation: Conducted and documented as recommended by the FDA guidance "Reprocessing Medical Devices in Health Care Settings: Validation Methods and Labeling."
- Biocompatibility Testing: Conducted in accordance with FDA guidance (ISO 10993-1). Included Cytotoxicity, ISO Intracutaneous (Rabbits), ISO Guinea Pig Maximization Sensitization, USP Rabbit Pyrogen, and ISO Acute Systemic Toxicity (Mice).
- Software Verification & Validation: Conducted and documented as recommended by the FDA guidances "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" and "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices."
- Electrical Safety & EMC: Conducted in accordance with ANSI/AAMI ES 60601-1:2005/(R)2012 and A1:2012 (safety), IEC 60601-2-18:2009 (safety), and IEC 60601-1-2:2014 (EMC).
- Performance Testing (Bench): Conducted to ensure the device performs as intended and meets design specifications. Specific tests included Thermal Safety, Composite Durability, Color Performance, Photobiological Safety, Noise and Dynamic Range, Image Intensity Uniformity, Field of View and Direction of View, Resolution, and Effectiveness of NBI.
- Risk Management: Performed in accordance with ISO 14971:2007. Design verification tests and their acceptance criteria were identified as a result of this process.
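Because these acceptance criteria are qualitative validations rather than numeric AI metrics, the submission reduces to a documentation checklist: each category must be completed per its governing guidance or standard. A minimal sketch (purely illustrative; the category and evidence strings mirror the summary, but this data structure is not part of the submission):

```python
# Illustrative sketch only: models the 510(k) validation checklist as data.
# The "evidence" strings paraphrase the summary; none of this code is from
# the actual submission.
from dataclasses import dataclass


@dataclass
class ValidationItem:
    category: str
    evidence: str
    completed: bool


checklist = [
    ValidationItem("Reprocessing Validation", "FDA reprocessing guidance", True),
    ValidationItem("Biocompatibility Testing", "ISO 10993-1 test battery", True),
    ValidationItem("Software Verification & Validation", "FDA software/cybersecurity guidance", True),
    ValidationItem("Electrical Safety & EMC", "IEC 60601 series", True),
    ValidationItem("Performance Testing (Bench)", "Design specifications", True),
    ValidationItem("Risk Management", "ISO 14971:2007", True),
]


def all_complete(items):
    """Return True only if every validation category is documented."""
    return all(item.completed for item in items)


print(all_complete(checklist))  # prints True: all categories documented
```

The pass/fail logic is deliberately trivial: substantial equivalence here rests on every category being addressed, not on any single quantitative threshold.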
Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The document does not detail specific "test set" sample sizes in the context of an AI model. The performance data listed (e.g., reprocessing, biocompatibility, electrical safety, bench testing) are typically generated from a representative number of device units or material samples, not from a "test set" of patient data as would be used for an AI algorithm. No country of origin for such test data, and no retrospective/prospective designation, is mentioned, as these are engineering and materials tests.
Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not applicable. This is not an AI device requiring expert-labeled ground truth from medical images or patient data. The "ground truth" for the tests performed (e.g., biocompatibility, electrical safety) would be established by validated scientific methods and industry standards, not by medical experts interpreting data.
Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable, as there is no "test set" in the context of AI performance evaluation.
Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance vs. without it
Not applicable. The device is an endoscope, not an AI-assisted diagnostic tool. No MRMC study was performed as there is no AI component or human-in-the-loop performance to evaluate. The document explicitly states: "No clinical study was performed to demonstrate substantial equivalence."
Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Not applicable. This device is an endoscope, not an algorithm.
The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The concept of "ground truth" in the context of AI algorithms (e.g., expert consensus for diagnoses) is not applicable here. For the engineering and safety tests performed:
- Reprocessing: Ground truth is defined by successful adherence to validated reprocessing protocols and sterility assurance levels (SAL).
- Biocompatibility: Ground truth is established by standardized biological tests (e.g., ISO 10993) demonstrating no adverse biological reaction.
- Electrical Safety & EMC: Ground truth is compliance with relevant electrical safety and electromagnetic compatibility standards (e.g., IEC 60601 series).
- Bench Testing: Ground truth is defined by meeting pre-specified design requirements and engineering specifications (e.g., resolution targets, field of view).
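For the bench tests, "ground truth" therefore amounts to comparing a measured value against a pre-specified range. A minimal sketch of that check (the parameter names, nominal values, and tolerances below are invented for illustration and do not come from the submission):

```python
# Hypothetical bench-test check. Each spec maps a test parameter to an
# allowed (lower_bound, upper_bound) range in that test's own units.
# The limits are made up for illustration, not taken from the 510(k).
specs = {
    "field_of_view_deg": (108.0, 132.0),      # e.g., nominal 120 deg +/- 10%
    "resolution_lp_per_mm": (5.0, float("inf")),  # e.g., minimum resolvable detail
}


def meets_spec(name, measured):
    """True if the measured value falls within the pre-specified range."""
    lo, hi = specs[name]
    return lo <= measured <= hi


print(meets_spec("field_of_view_deg", 121.5))    # True: within tolerance
print(meets_spec("resolution_lp_per_mm", 4.2))   # False: below the minimum
```

Each design verification test in the submission follows this pattern: the acceptance criterion is fixed before testing, and the device either meets it or does not.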
The sample size for the training set
Not applicable. This device is not an AI model, so there is no training set.
How the ground truth for the training set was established
Not applicable.