K Number
K211326
Device Name
EndoScreener
Date Cleared
2021-11-19

(203 days)

Product Code
Regulation Number
876.1520
Reference & Predicate Devices
N/A
Predicate For
N/A
Intended Use

EndoScreener is intended as a stand-alone software for real-time automatic detection of polyps in colonoscopy video stream during the procedure.

Physicians are responsible for reviewing the areas of suspect polyps identified by EndoScreener and confirming the presence or absence of a polyp based on their own evaluation of the colonoscopy image and medical judgment. EndoScreener is not intended to replace a full patient evaluation, nor is it intended to be relied upon to make or confirm a diagnosis.

EndoScreener is indicated for use by licensed endoscopists who perform colonoscopy in adults. EndoScreener is indicated for use with white light colonoscopy.

Device Description

The EndoScreener is a computer-assisted detection device for colorectal polyps. EndoScreener takes as input a colonoscopy video stream from an endoscopy device, which is analyzed in real time. The device output consists of blue boxes overlaid onto the colonoscopy images to highlight regions of potential polyps. EndoScreener also has the option to sound an alert to the physician performing the colonoscopy when a polyp has been detected. Following detection by EndoScreener, the physician must confirm the EndoScreener findings based on his/her own medical judgment.
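The blue-box overlay described above can be sketched in a few lines. This is an illustrative sketch only: the `draw_box` helper, the frame dimensions, and the box coordinates are hypothetical stand-ins for whatever the device's real video pipeline and detection model produce.

```python
import numpy as np

def draw_box(frame: np.ndarray, x0: int, y0: int, x1: int, y1: int,
             thickness: int = 2) -> np.ndarray:
    """Return a copy of `frame` with a blue rectangle border overlaid."""
    out = frame.copy()
    blue = np.array([0, 0, 255], dtype=frame.dtype)  # RGB blue
    out[y0:y0 + thickness, x0:x1] = blue             # top edge
    out[y1 - thickness:y1, x0:x1] = blue             # bottom edge
    out[y0:y1, x0:x0 + thickness] = blue             # left edge
    out[y0:y1, x1 - thickness:x1] = blue             # right edge
    return out

# Hypothetical frame and detection coordinates for illustration.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
marked = draw_box(frame, 100, 80, 300, 240)
```

Drawing only a thin border (rather than filling the region) leaves the underlying mucosa visible to the endoscopist, consistent with the device's role of highlighting rather than obscuring suspect areas.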

AI/ML Overview

The 510(k) summary describes the EndoScreener's acceptance criteria and the studies supporting them. However, it does not tabulate acceptance criteria against reported performance values; it states only that "acceptable performance was obtained" and that "the polyp detection accuracy observed was as expected."

The available information is extracted and structured below, with limitations noted where the source document is sparse:


Acceptance Criteria and Device Performance Study for EndoScreener

The EndoScreener is a computer-assisted detection device intended for real-time automatic detection of polyps in colonoscopy video streams. The performance of the device was evaluated through validation studies and a multi-center randomized controlled trial.

1. Table of Acceptance Criteria and Reported Device Performance

| Acceptance Criteria Category | Specific Metric | Acceptance Threshold (Implicit) | Reported Device Performance |
|---|---|---|---|
| Per-Image Performance | Sensitivity | Not specified, but "acceptable" | "acceptable performance was obtained" |
| Per-Image Performance | Specificity | Not specified, but "acceptable" | "acceptable performance was obtained" |
| Per-Polyp Performance | Sensitivity | Not specified, but "acceptable" | "acceptable performance was obtained" |
| Per-Polyp Performance | AUC (Area Under Curve) | Not specified, but "acceptable" | "acceptable performance was obtained" |
| Clinical Efficacy (RCT) | Adenoma Miss Rate (AMR) | Significantly lower than control | Significantly lower in CADe-first group |
| Clinical Efficacy (RCT) | Adenoma per Colonoscopy (APC) | Higher than control | Higher in CADe-first group |
| Technical Performance | Imaging Degradation | None | "no imaging degradation" |
| Technical Performance | End-to-end Latency | Ignorable | "ignorable end-to-end latency" |
| Technical Performance | Functionality | As intended | "functioned as intended" |
| Technical Performance | Polyp Detection Accuracy | As expected | "polyp detection accuracy observed was as expected" |

Note: The document only provides qualitative statements regarding "acceptable" or "as expected" performance for the per-image and per-polyp metrics, and does not list the specific quantitative acceptance thresholds or the actual numerical performance values achieved on these metrics.
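Since the summary reports no numbers, it may help to recall how the per-image metrics in the table are conventionally defined. The functions below are standard definitions; the confusion counts are hypothetical and do not come from the submission.

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of polyp-containing images the device flagged (true positive rate)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of polyp-free images the device left unflagged (true negative rate)."""
    return tn / (tn + fp)

# Hypothetical per-image confusion counts, for illustration only.
sens = sensitivity(tp=920, fn=80)    # 0.92
spec = specificity(tn=850, fp=150)   # 0.85
```

Per-polyp sensitivity follows the same formula with polyps, rather than images, as the unit of analysis: a polyp counts as detected if the device flags it in at least one frame.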

2. Sample Sizes and Data Provenance

  • Test Set (Validation Datasets):
    • Sample Size: 1,138 consecutive polyp patients (for histology-confirmed dataset). The document mentions "multiple datasets" but only specifies the size for one of them.
    • Data Provenance: Not explicitly stated for the validation datasets regarding country of origin or whether it was retrospective/prospective. It's implied these were pre-existing datasets used for algorithm validation.
  • Test Set (Clinical Trial):
    • Sample Size: 223 patients.
    • Data Provenance: Multi-center randomized controlled trial performed at four United States academic medical centers. The study design (back-to-back colonoscopy procedures with randomization to CADe-routine and Routine-CADe groups) indicates it was a prospective study.
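The tandem (back-to-back) design allows the primary endpoint to be computed as a simple ratio: adenomas missed on the first pass but found on the second, over all adenomas found across both passes. The sketch below uses hypothetical counts; the summary reports no numerical AMR values.

```python
def adenoma_miss_rate(found_first_pass: int, found_second_pass_only: int) -> float:
    """AMR = adenomas found only on the second pass / total adenomas found."""
    total = found_first_pass + found_second_pass_only
    return found_second_pass_only / total

# Hypothetical example: 80 adenomas detected on pass 1,
# 20 additional adenomas detected on pass 2.
amr = adenoma_miss_rate(found_first_pass=80, found_second_pass_only=20)  # 0.20
```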

3. Number of Experts and Qualifications for Ground Truth (Test Set)

  • Validation Datasets: The document states the dataset for evaluating per-image and per-polyp performance used "histology confirmation." This suggests that the ground truth for polyps specifically was established by pathology/histology, not by a panel of medical experts. It does not mention experts used for image-level ground truth.
  • Clinical Trial: The ground truth for the clinical trial would be the confirmed polyp findings from the colonoscopy procedures. The primary endpoint, adenoma miss rate, implies that polyps were confirmed (likely by histology after removal). The document does not specify the number of endoscopists or their specific qualifications for establishing ground truth in terms of polyp presence/absence or adenoma identification within the study. However, it states the device is for "licensed endoscopists who perform colonoscopy in adults," implying these are the experts conducting the procedures.

4. Adjudication Method for the Test Set

  • Validation Datasets: For the 1,138 polyp patients, ground truth was established by "histology confirmation." This indicates that post-procedure pathological analysis of excised tissue was the definitive method for polyp presence. This doesn't typically involve a multi-reader visual adjudication process.
  • Clinical Trial: The study design (tandem colonoscopy, randomized controlled trial) focused on actual clinical outcomes (adenoma miss rate, adenoma per colonoscopy). Ground truth would derive from the findings of the colonoscopy procedure itself, including subsequent histology for removed polyps. There is no mention of a separate expert adjudication panel for the images/videos from the clinical trial data.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

  • Yes, a form of comparative effectiveness study was done, but not explicitly described as a traditional MRMC study. The document describes a "multi-center, tandem colonoscopy, randomized controlled trial."
  • Effect Size of Human Readers Improvement with AI vs. without AI Assistance:
    • The primary endpoint, adenoma miss rate (AMR), was "significantly lower in CADe-first group" compared to the Routine-CADe group. This directly indicates an improvement in detection when AI was used first.
    • The 1st pass adenoma per colonoscopy (APC) was "higher in the CADe-first group." This also indicates an improvement in the number of adenomas detected by the endoscopist when aided by the AI.
    • The document states these were "significant" improvements, but does not provide the numerical effect sizes (e.g., specific percentage reduction in AMR, or specific increase in APC per colonoscopy).
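A claim that AMR was "significantly lower" in one arm is typically backed by a comparison of proportions between arms, for example a two-proportion z-test. The sketch below is one conventional way such a comparison is made; all counts are hypothetical, and the submission discloses neither its statistical method nor its effect sizes.

```python
import math

def two_proportion_z(miss_a: int, total_a: int, miss_b: int, total_b: int) -> float:
    """Pooled two-proportion z statistic for miss rates in arms A and B."""
    p_a, p_b = miss_a / total_a, miss_b / total_b
    p_pool = (miss_a + miss_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical: CADe-first arm misses 20/100 adenomas, routine-first 40/100.
z = two_proportion_z(20, 100, 40, 100)  # |z| > 1.96 => significant at alpha = 0.05
```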

6. Standalone (Algorithm Only) Performance Study

  • Yes. The nonclinical testing section explicitly states that "performance was evaluated on a dataset of 1,138 consecutive polyp patients with histology confirmation and acceptable performance was obtained" for "per-image sensitivity and specificity as well as per-polyp sensitivity and AUC." This describes the algorithm's standalone performance preceding the clinical trial.
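The AUC reported for standalone performance can be understood through its rank-based definition: the probability that a randomly chosen positive (polyp) receives a higher detector score than a randomly chosen negative. The scores below are hypothetical; the actual model outputs and thresholds are not disclosed.

```python
def auc(pos_scores, neg_scores):
    """Rank-based AUC: P(random positive scores above random negative), ties count 0.5."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical detector scores for illustration only.
value = auc([0.9, 0.8, 0.7, 0.4], [0.6, 0.3, 0.2, 0.1])  # 0.9375
```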

7. Type of Ground Truth Used

  • For the standalone performance evaluation: Histology confirmation for polyps.
  • For the clinical trial: Clinical outcomes data (adenoma miss rate, adenoma per colonoscopy) based on colonoscopy findings and confirmed histology.

8. Sample Size for the Training Set

  • Not specified. The document mentions the device uses a "customized deep learning model" but provides no information about the size or characteristics of the data used to train this model.

9. How the Ground Truth for the Training Set was Established

  • Not specified. Given that it's a deep learning model, it would require annotated data for training, but the document does not describe the process by which this training data was annotated or its ground truth established.

§ 876.1520 Gastrointestinal lesion software detection system.

(a) Identification. A gastrointestinal lesion software detection system is a computer-assisted detection device used in conjunction with endoscopy for the detection of abnormal lesions in the gastrointestinal tract. This device with advanced software algorithms brings attention to images to aid in the detection of lesions. The device may contain hardware to support interfacing with an endoscope.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Clinical performance testing must demonstrate that the device performs as intended under anticipated conditions of use, including detection of gastrointestinal lesions and evaluation of all adverse events.
(2) Non-clinical performance testing must demonstrate that the device performs as intended under anticipated conditions of use. Testing must include:
(i) Standalone algorithm performance testing;
(ii) Pixel-level comparison of degradation of image quality due to the device;
(iii) Assessment of video delay due to marker annotation; and
(iv) Assessment of real-time endoscopic video delay due to the device.
(3) Usability assessment must demonstrate that the intended user(s) can safely and correctly use the device.
(4) Performance data must demonstrate electromagnetic compatibility and electrical safety, mechanical safety, and thermal safety testing for any hardware components of the device.
(5) Software verification, validation, and hazard analysis must be provided. Software description must include a detailed, technical description including the impact of any software and hardware on the device's functions, the associated capabilities and limitations of each part, the associated inputs and outputs, mapping of the software architecture, and a description of the video signal pipeline.
(6) Labeling must include:
(i) Instructions for use, including a detailed description of the device and compatibility information;
(ii) Warnings to avoid overreliance on the device, that the device is not intended to be used for diagnosis or characterization of lesions, and that the device does not replace clinical decision making;
(iii) A summary of the clinical performance testing conducted with the device, including detailed definitions of the study endpoints and statistical confidence intervals; and
(iv) A summary of the standalone performance testing and associated statistical analysis.
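The pixel-level degradation test named in special control (2)(ii) can be sketched as a direct per-pixel comparison of a frame before and after it passes through the device, outside any annotated region. The frames and the `max_pixel_deviation` helper below are synthetic illustrations, not the actual test protocol.

```python
import numpy as np

def max_pixel_deviation(frame_in: np.ndarray, frame_out: np.ndarray) -> int:
    """Largest absolute per-pixel difference between input and output frames."""
    diff = frame_in.astype(np.int16) - frame_out.astype(np.int16)
    return int(np.abs(diff).max())

# Synthetic source frame; a real test would tap the device's video pipeline.
src = np.random.default_rng(0).integers(0, 256, (480, 640, 3), dtype=np.uint8)
assert max_pixel_deviation(src, src.copy()) == 0  # lossless passthrough
```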