K Number
K231143
Date Cleared
2023-05-19 (28 days)
Product Code
Regulation Number
876.1520
Panel
GU
Reference & Predicate Devices
Intended Use

The GI Genius System is a computer-assisted reading tool designed to aid endoscopists in detecting colonic mucosal lesions (such as polyps and adenomas) in real time during standard white-light endoscopy examinations of patients undergoing screening and surveillance endoscopic mucosal evaluations. The GI Genius computer-assisted detection device is limited for use with standard white-light endoscopy imaging only. This device is not intended to replace clinical decision making.

Device Description

GI Genius is an artificial intelligence-based device that has been trained to process colonoscopy images containing regions consistent with colorectal lesions like polyps, including those with flat (non-polypoid) morphology.

GI Genius is compatible with video processors featuring SDI (SMPTE 259M) or HD-SDI (SMPTE 292M) output ports and endoscopic display monitors featuring SDI (SMPTE 259M) or HD-SDI (SMPTE 292M) input ports.

GI Genius is connected between the video processor and the endoscopic display monitor. When first switched on, the endoscopic field of view is clearly identified by four corner markers, and a blinking green square appears on the connected endoscopic display monitor to indicate that the system is ready to function.

During live streaming of the endoscopic video image, GI Genius generates a video output on the endoscopic display monitor that contains the original live video together with superimposed green square markers that appear when a polyp or other lesion of interest is detected, accompanied by a short sound. These markers are not visible when no lesion is detected.
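As a rough illustration of this overlay behavior only (not Medtronic's implementation; the frame source, detector interface, and marker styling below are assumptions), a minimal frame-annotation loop along these lines could look like the following Python/OpenCV sketch:

```python
import cv2  # OpenCV for video capture and drawing


def annotate_stream(capture_source, detect_lesions):
    """Overlay green square markers on frames where a detector reports lesions.

    `detect_lesions` is a hypothetical callable returning a list of
    (x, y, w, h) bounding boxes for the current frame; the actual device's
    detector and SDI/HD-SDI video pipeline are proprietary and not described
    in the submission.
    """
    cap = cv2.VideoCapture(capture_source)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for (x, y, w, h) in detect_lesions(frame):
            # Green rectangle marker, analogous to the on-screen square
            # described above.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("annotated", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```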

The operating principle of the subject device is identical to that of the predicate device: a computer-assisted detection device used in conjunction with endoscopy for the detection of abnormal lesions in the gastrointestinal tract. Advanced software algorithms bring attention to images to aid in the detection of lesions, and the device includes hardware to support interfacing with video endoscopy systems.

AI/ML Overview

The provided text describes the acceptance criteria and the study that proves the GI Genius System (System 100 and System 200) meets these criteria, specifically in the context of its substantial equivalence to a previously cleared device.

1. Table of Acceptance Criteria and Reported Device Performance

The relevant performance metrics are presented as a comparison between the subject device (GI Genius System 100/200) and its predicate device. Since the claim is substantial equivalence, the performance of the subject device is shown to be at least as good as that of the predicate.

| Characteristic | Acceptance Criteria (Predicate Performance) | Reported Device Performance (Subject Device) | Comparison |
|---|---|---|---|
| Lesion-based sensitivity | 86.5 % | 86.5 % | Same |
| Frame-level performance (150 videos / 338 polyps) | | | |
| True positives | 269,223 | 269,223 | Same |
| True negatives | 5,239,128 | 5,239,128 | Same |
| False positives | 104,669 | 104,669 | Same |
| False negatives | 192,567 | 192,567 | Same |
| True positive rate per frame (mean) | 58.30 % | 58.30 % | Same |
| True positive rate per frame (% of polyps) | 100 % | 100 % | Same |
| False positive rate per frame (mean) | 1.96 % | 1.96 % | Same |
| Frame-based TPR/FPR ROC curve, AUC | 0.796 | 0.796 | Same |
| False positive clusters per patient (500 ms) | 1 more than baseline | 1 more than baseline | Same |
| Video delay, signal in to signal out (GI Genius System 200) | 1.52 µs | 0.74 µs | Improved |
| Video delay, signal in to signal out (GI Genius System 100) | 1.52 µs | 1.52 µs | Same |
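As a back-of-the-envelope check (not part of the submission), the per-frame rates in the table can be recomputed from the pooled frame-level counts reported above:

```python
# Frame-level counts reported in the performance table.
tp, tn, fp, fn = 269_223, 5_239_128, 104_669, 192_567

# Per-frame true positive rate (sensitivity) and false positive rate.
tpr = tp / (tp + fn)   # ≈ 0.583, consistent with the 58.30 % per-frame TPR
fpr = fp / (fp + tn)   # ≈ 0.0196, consistent with the 1.96 % per-frame FPR

print(f"TPR per frame: {tpr:.2%}")
print(f"FPR per frame: {fpr:.2%}")
```

Note that the table reports these figures as means, so the pooled calculation above is only an approximate cross-check of the reported values.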

2. Sample Size Used for the Test Set and Data Provenance

  • Sample Size: The performance data are based on 150 videos containing 338 polyps.
  • Data Provenance: The document does not explicitly state the country of origin or whether the data were retrospective or prospective. However, it states that the "baseline clinical validation for the subject device was conducted and reviewed in DEN200055," suggesting this test data comes from a prior, possibly larger, study. The testing for the current submission involved repeated standalone performance tests and non-inferiority testing on "42 pre-recorded procedures."

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts

The document does not provide details on the number or qualifications of experts used to establish the ground truth for the test set. It mentions "per-frame assessment," which implies a detailed, possibly manual, review process.

4. Adjudication Method for the Test Set

The document does not explicitly state an adjudication method (e.g., 2+1, 3+1).

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

The document does not describe a multi-reader multi-case (MRMC) comparative effectiveness study to assess how human readers improve with AI vs. without AI assistance. The study focuses on the device's standalone performance and its non-inferiority (or equivalence) to a predicate device, rather than human-AI collaboration.

6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

Yes, a standalone performance study was done. The document states:

  • "Tests according to the Standalone Performance Testing Protocol v2.0, submitted as part of the K211951 predicate device submission, have been repeated for the applicable parts of the subject device."
  • The performance metrics provided in the table (true positives, true negatives, false positives, false negatives, ROC curve AUC, per-frame rates, and false positive clusters) are indicative of standalone algorithm performance.
  • "Non-inferiority of performance of GI Genius with the Olympus CV-1500 EVIS X1 UHD video processor has been established by means of a per-frame assessment on 42 pre-recorded procedures." This further supports standalone assessment.

7. The Type of Ground Truth Used

The ground truth appears to be based on expert consensus or a similar form of expert review, as evidenced by the "per-frame assessment" and lesion-level evaluation related to polyps and adenomas, which are typically confirmed by medical professionals. Pathology or outcomes data are not explicitly mentioned as the primary ground truth.

8. The Sample Size for the Training Set

The document does not explicitly state the sample size for the training set. It mentions that "GI Genius is an artificial intelligence-based device that has been trained to process colonoscopy images." The performance evaluation focuses on the test set, not the training set details.

9. How the Ground Truth for the Training Set Was Established

The document does not provide details on how the ground truth for the training set was established. It only states that the device "has been trained to process colonoscopy images containing regions consistent with colorectal lesions like polyps."

§ 876.1520 Gastrointestinal lesion software detection system.

(a) Identification. A gastrointestinal lesion software detection system is a computer-assisted detection device used in conjunction with endoscopy for the detection of abnormal lesions in the gastrointestinal tract. This device with advanced software algorithms brings attention to images to aid in the detection of lesions. The device may contain hardware to support interfacing with an endoscope.

(b) Classification. Class II (special controls). The special controls for this device are:
(1) Clinical performance testing must demonstrate that the device performs as intended under anticipated conditions of use, including detection of gastrointestinal lesions and evaluation of all adverse events.
(2) Non-clinical performance testing must demonstrate that the device performs as intended under anticipated conditions of use. Testing must include:
(i) Standalone algorithm performance testing;
(ii) Pixel-level comparison of degradation of image quality due to the device;
(iii) Assessment of video delay due to marker annotation; and
(iv) Assessment of real-time endoscopic video delay due to the device.
(3) Usability assessment must demonstrate that the intended user(s) can safely and correctly use the device.
(4) Performance data must demonstrate electromagnetic compatibility and electrical safety, mechanical safety, and thermal safety testing for any hardware components of the device.
(5) Software verification, validation, and hazard analysis must be provided. Software description must include a detailed, technical description including the impact of any software and hardware on the device's functions, the associated capabilities and limitations of each part, the associated inputs and outputs, mapping of the software architecture, and a description of the video signal pipeline.
(6) Labeling must include:
(i) Instructions for use, including a detailed description of the device and compatibility information;
(ii) Warnings to avoid overreliance on the device, that the device is not intended to be used for diagnosis or characterization of lesions, and that the device does not replace clinical decision making;
(iii) A summary of the clinical performance testing conducted with the device, including detailed definitions of the study endpoints and statistical confidence intervals; and
(iv) A summary of the standalone performance testing and associated statistical analysis.
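As a generic illustration of the kind of measurement special control (2)(ii) calls for, pixel-level degradation introduced by a pass-through device is commonly quantified with a metric such as peak signal-to-noise ratio (PSNR) between corresponding input and output frames. The snippet below is a simplified sketch using NumPy, not the method used in this submission:

```python
import numpy as np


def psnr(reference_frame, output_frame, max_value=255.0):
    """Peak signal-to-noise ratio between a source frame and the frame
    emitted by the device under test; higher values indicate less
    pixel-level degradation."""
    ref = reference_frame.astype(np.float64)
    out = output_frame.astype(np.float64)
    mse = np.mean((ref - out) ** 2)
    if mse == 0:
        return float("inf")  # frames are pixel-identical
    return 10.0 * np.log10((max_value ** 2) / mse)
```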