VISERA RHINOLARYNGOVIDEOSCOPE OLYMPUS ENF TYPE V

This instrument is designed to be used with an Olympus video system center, light source, documentation equipment, display monitor, endo-therapy accessories, and other ancillary equipment for endoscopic diagnosis and treatment within the nasal and nasopharyngeal lumens.

The subject device is used for endoscopic diagnosis and treatment within the nasal and nasopharyngeal lumens. The optical system is changed from an image guide to a CCD, and the resolution is improved.
The provided document is a 510(k) summary for the Olympus ENF Type V rhinolaryngovideoscope. It focuses on establishing substantial equivalence to a predicate device rather than presenting a performance study with detailed acceptance criteria and standalone algorithm performance as would be expected for a device incorporating AI or complex computational analysis. Therefore, much of the requested information regarding AI-specific criteria and studies cannot be extracted from this document, as the device is a videoscope and not an AI-driven diagnostic system.
Here's the information that can be extracted and a clear indication of what is not present:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state "acceptance criteria" in the traditional sense of performance targets for a scientific study (e.g., sensitivity, specificity thresholds). Instead, it provides a comparative table of specifications between the subject device (ENF-V) and the predicate device (XENF-TP) to demonstrate substantial equivalence. The "acceptance" in this context is that the subject device meets or exceeds the relevant specifications of the predicate device.
| Specification | Acceptance Criteria (Predicate Device XENF-TP) | Reported Device Performance (Subject Device ENF-V) |
|---|---|---|
| Field of view | 85° | 90° |
| Direction of view | 0° (Forward) | 0° (Forward) |
| Depth of field | 3-50 mm | 5-50 mm |
| Insertion tube working length | 365 mm | 365 mm |
| Insertion tube outer diameter | $\phi$ 5.0 mm | $\phi$ 3.9 mm |
| Bending section angulation range | Up 130°, Down 130° | Up 130°, Down 130° |
| Total length | 620 mm | 593 mm |
| Optical system | Image guide | CCD |
| Resolution (max, min) | 5.62 lines/mm (max), 0.70 lines/mm (min) | 12.6 lines/mm |
Summary of improved specifications for the subject device (ENF-V):
- Wider field of view (90° vs 85°)
- Shorter total length (593mm vs 620mm)
- Improved optical system (CCD vs Image guide)
- Significantly improved resolution (12.6 lines/mm vs. a predicate maximum of 5.62 lines/mm and minimum of 0.70 lines/mm)
- Smaller insertion tube outer diameter ($\phi$ 3.9mm vs $\phi$ 5.0mm)
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This information is not provided in the document. The submission focuses on device specifications and substantial equivalence, not a clinical study with a defined test set of patients or medical data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided in the document. As there is no clinical study with a test set of data, there is no mention of experts establishing ground truth.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided in the document. As there is no clinical study with a test set of data, there is no mention of an adjudication method.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
This is a videoscope with an improved optical system, not an AI-assisted device. Therefore, an MRMC study related to AI assistance was not applicable and not performed.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
This is not an AI algorithm. Therefore, a standalone performance evaluation of an algorithm was not applicable and not performed.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
This information is not provided in the document. The device itself is an imaging tool; the "ground truth" for its performance is related to its physical and optical specifications, not diagnostic labels derived from expert consensus, pathology, or outcomes data in a clinical study context for a diagnostic algorithm.
8. The sample size for the training set
This information is not provided in the document. This is not an AI-driven device, so there is no concept of a "training set" for an algorithm.
9. How the ground truth for the training set was established
This information is not provided in the document. As there is no training set for an algorithm, the establishment of its ground truth is not applicable.