
510(k) Data Aggregation

    K Number: DEN230027
    Device Name: NaviCam ProScan
    Date Cleared: 2023-12-12 (242 days)
    Product Code: (not listed)
    Regulation Number: 876.1540
    Type: Direct
    Reference & Predicate Devices: (none listed)

    Tags: AI/ML · SaMD · IVD (In Vitro Diagnostic) · Therapeutic · Diagnostic · PCCP Authorized · Third-party · Expedited review
    Intended Use

    NaviCam ProScan is an artificial intelligence (AI) assisted reading tool designed to aid small bowel capsule endoscopy reviewers in decreasing the time to review capsule endoscopy images for adult patients in whom the capsule endoscopy images were obtained for suspected small bowel bleeding. The clinician is responsible for conducting their own assessment of the findings of the AI-assisted reading through review of the entire video, as clinically appropriate. ProScan also assists small bowel capsule endoscopy reviewers in identifying the digestive tract location (oral cavity and beyond, esophagus, stomach, small bowel) of the image in adults. This tool is not intended to replace clinical decision making.

    Device Description

    The NaviCam ProScan is artificial intelligence software that has been trained to process capsule endoscopy images of the small bowel acquired by the NaviCam Small Bowel Capsule Endoscopy System to recognize the various sections of the digestive tract and to recognize and mark images containing suspected abnormal lesions.

    NaviCam ProScan is intended to be used as an adjunct to the ESView software of the NaviCam Small Bowel Capsule Endoscopy System (both cleared in K221590) and is not intended to replace gastroenterologist assessment or histopathological sampling.

    NaviCam ProScan does not make any modification or alteration to the original capsule endoscopy video. It only overlays graphical markers and includes an option to only display these identified images. The whole small bowel capsule endoscopy video and highlighted regions still must be independently assessed by the clinician and appropriate actions taken according to standard clinical practice.

    The NaviCam ProScan software includes two main algorithms, as illustrated in Figure 1 below:

    • Digestive tract site recognition, which includes an image analysis algorithm and site segmentation algorithm to determine: oral and beyond, esophagus, stomach, and small bowel. Tract site is displayed as a color code on the video timeline with descriptions on the indicators at the bottom of the software user interface.
    • Small bowel lesion recognition, which includes the small bowel lesion image analysis algorithm with lesion region localization. Potential lesions are marked with a bounding box as illustrated in Figure 2 below, with the active video played at the top section of the figure, and ProScan-identified images in the lower section, which includes images with suspected lesions and individual images marking the transition in the digestive tract. The algorithm is functional only on those sections of the GI tract that were identified as "small bowel" by the digestive tract site recognition software function.
    AI/ML Overview

    Here's a detailed breakdown of the acceptance criteria and the studies proving the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance

    Lesion Detection - Standalone Algorithm Performance (Image-Level)

    Acceptance Criteria   Reported Device Performance
    Sensitivity           95.05% (95% CI: 94.28%-95.72%)
    Specificity           97.54% (95% CI: 97.28%-97.78%)
    AUC                   0.993 (95% CI: 0.981 to 1.000)
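The image-level metrics above can be reproduced from raw confusion-matrix counts. A minimal sketch, assuming a Wilson score interval (the summary does not state which CI method was used) and hypothetical TP/FN/TN/FP counts chosen only so the ratios match the reported point estimates:

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def sens_spec(tp: int, fn: int, tn: int, fp: int):
    """Image-level sensitivity and specificity from confusion counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts -- the actual image totals are not given in the
# summary; these were picked only so the ratios match the point estimates.
sens, spec = sens_spec(tp=1921, fn=100, tn=15000, fp=378)
ci_lo, ci_hi = wilson_ci(1921, 2021)
```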

    Tract Site Recognition - Standalone Algorithm Performance (Image-Level)

    Site                     Sensitivity (95% CI)      Specificity (95% CI)
    Oral cavity and beyond   99.47% (99.14%-99.68%)    99.50% (99.39%-99.58%)
    Esophagus                98.92% (97.79%-99.50%)    99.10% (98.98%-99.22%)
    Stomach                  99.60% (99.49%-98.69%)    99.06% (98.80%-99.26%)
    Small Bowel              99.26% (98.89%-99.51%)    98.36% (98.18%-98.52%)
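Per-site sensitivity and specificity for a multi-class task like tract site recognition are typically computed one-vs-rest, treating each site in turn as the positive class. A toy sketch (the labels below are invented for illustration):

```python
def one_vs_rest(truth, pred, site):
    """Per-site sensitivity/specificity, treating `site` as the positive
    class and every other site as negative."""
    tp = sum(t == site and p == site for t, p in zip(truth, pred))
    fn = sum(t == site and p != site for t, p in zip(truth, pred))
    tn = sum(t != site and p != site for t, p in zip(truth, pred))
    fp = sum(t != site and p == site for t, p in zip(truth, pred))
    return tp / (tp + fn), tn / (tn + fp)

# Invented toy labels, purely for illustration
truth = ["stomach", "stomach", "small_bowel", "esophagus", "oral", "small_bowel"]
pred  = ["stomach", "small_bowel", "small_bowel", "esophagus", "oral", "small_bowel"]
sens, spec = one_vs_rest(truth, pred, "stomach")
```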

    Clinical Performance (AI+Physician vs. Standard Reading)

    Metric            AI+Physician                     Standard Reading
    Diagnostic Yield  73.7% (95% CI: 65.3%-80.9%)      62.4% (95% CI: 53.6%-70.7%)
    Reading Time      3 min 50 s (±3 min 20 s)         33 min 42 s (±22 min 51 s)
    Non-inferiority   Demonstrated non-inferiority to expert board reading and superiority to standard reading for diagnostic yield    —
    False Negatives   7 (vs. expert board)             22 (vs. expert board)
    False Positives   0 (after physician review)       0 (after physician review)

    Study Details and Provenance

    2. Sample Sizes and Data Provenance

    Standalone Algorithm Testing (Lesion Detection)

    • Test Set Sample Size: 218 patients
    • Data Provenance: Obtained from 8 clinical institutions in China. The study was retrospective.

    Standalone Algorithm Testing (Tract Site Recognition)

    • Test Set Sample Size: 424 patients
    • Data Provenance: Obtained from 8 clinical institutions in China. The study was retrospective.

    Clinical Study (ARTIC Study)

    • Test Set Sample Size: 133 patients (from an initial enrollment of 137).
    • Data Provenance: Patients enrolled prospectively from 7 European centers (Italy, France, Germany, Hungary, Spain, Sweden, and UK) from February 2021 to January 2022.

    3. Number of Experts and Qualifications for Ground Truth

    Standalone Algorithm Testing (Lesion Detection & Tract Site Recognition)

    • Number of Experts: Initially three gastroenterologists for pre-annotation, followed by two arbitration experts for review and modification. A total of five experts were involved in establishing the ground truth when including the arbitration experts.
    • Qualifications: "Gastroenterologists" are explicitly stated. No specific experience level (e.g., years of experience) is provided for these experts in the available text.

    Clinical Study (ARTIC Study)

    • Number of Experts: An expert board consisting of 5 of the original 22 clinician readers was used to establish ground truth.
    • Qualifications: The original 22 clinician readers "had capsule endoscopy experience of over 500 readings." It can be inferred that the 5 experts on the expert board had similar or higher qualifications.

    4. Adjudication Method

    Standalone Algorithm Testing (Lesion Detection & Tract Site Recognition)

    • Method: Initial annotations by three gastroenterologists. "The computer automatically determines consistency and merges the classification results while preserving differing opinions." If consistency was less than a cutoff value (specifically "less than 3" for lesion detection, implying inconsistency among the 3 initial annotators), two arbitration experts independently review and modify the results. In difficult cases, "collective discussion and confirmation" were conducted by the adjudication experts. This aligns with a 3+2 adjudication model or a similar consensus-based approach with arbitration.
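The truthing software itself is not described in detail; the following is a speculative sketch of the consensus-plus-arbitration flow described above, with `cutoff=3` meaning all three initial annotators must agree before a label is accepted without arbitration:

```python
def adjudicate(labels, arbiters, cutoff=3):
    """Return the consensus label: accept when all three initial
    annotators agree (consistency == cutoff), otherwise defer to the
    arbitration experts, escalating to collective discussion when the
    arbiters also disagree. Purely illustrative logic."""
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    top, n = max(counts.items(), key=lambda kv: kv[1])
    if n >= cutoff:                      # unanimous initial annotation
        return top
    verdicts = {arbiter(labels) for arbiter in arbiters}
    if len(verdicts) == 1:               # arbitration experts agree
        return verdicts.pop()
    return "collective discussion"       # difficult case: escalate

# Toy arbiters that simply pick the majority initial label
majority = lambda labels: max(set(labels), key=labels.count)
result = adjudicate(["lesion", "lesion", "normal"], [majority, majority])
```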

    Clinical Study (ARTIC Study)

    • Method: An expert board was used to "adjudicate the findings in case of disagreement" between standard readings and AI+Physician readings. Discordant cases were "re-evaluated and eventually reclassified during the adjudication phase." This suggests a consensus-based adjudication by the expert board. The exact protocol (e.g., how disagreements within the expert board were resolved) is not explicitly detailed, but it functions as the final ground truth determination.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Yes, an MRMC comparative effectiveness study was conducted (the ARTIC study).
    • Effect Size of Human Readers' Improvement with AI vs. without AI Assistance:
      • Diagnostic Yield: AI-assisted reading (AI+Physician) achieved a diagnostic yield of 73.7% compared to 62.4% for standard reading (without AI), showing an absolute improvement of 11.3 percentage points. This improvement was statistically significant (p=0.015).
      • Reading Time: Mean reading time with AI assistance was 3 minutes 50 seconds, significantly faster than 33 minutes 42 seconds for standard reading. This represents a reduction of approximately 88.6% in reading time.
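The reported time reduction follows directly from the two means, converted to seconds:

```python
# Mean reading times reported in the ARTIC study, converted to seconds
standard = 33 * 60 + 42    # standard reading: 33 min 42 s -> 2022 s
assisted = 3 * 60 + 50     # AI-assisted reading: 3 min 50 s -> 230 s
reduction = 1 - assisted / standard    # ~0.886, i.e. roughly an 88.6% cut
```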

    6. Standalone Performance (Algorithm Only without Human-in-the-Loop)

    • Yes, standalone performance testing was conducted for both the lesion detection function and the tract site recognition function.

      • Lesion Detection (Standalone):
        • Patient-level sensitivity: 98%
        • Patient-level specificity: 37%
        • Image-level sensitivity: 95.05%
        • Image-level specificity: 97.54%
      • Tract Site Recognition (Standalone):
        • Sensitivity and specificity values for each anatomical site were all above 98%.
    • Important Caveat: The regulatory information states, "In the clinical study of the device, performance (sensitivity and specificity) of the device in the absence of clinician input was not evaluated. Therefore, the AI standalone performance in the clinical study of NaviCam ProScan has not been established." This highlights a distinction between the "standalone algorithm testing" reported in detail and the performance within the clinical use context (i.e., the AI output before a clinician potentially overrides it). The clinical study, ARTIC, primarily evaluates "AI+Physician" performance. The document explicitly notes that the number of false positive predictions from the AI software (in the absence of physician input) in the ARTIC study is unknown.

    7. Type of Ground Truth Used

    • Standalone Algorithm Testing: Expert consensus (multiple gastroenterologists with arbitration) on individual images and patient cases.
    • Clinical Study (ARTIC Study): Expert board reading and adjudication (5 experienced readers) of videos. This essentially serves as an expert consensus ground truth for the clinical effectiveness study.

    8. Sample Size for the Training Set

    Lesion Detection Function:

    • Training Set Sample Size: 1,476 patients (from a dataset of 2,642 patients).

    Tract Site Recognition Function:

    • Training Set Sample Size: 1,386 patients (from a dataset of 2,642 patients).

    9. How Ground Truth for the Training Set Was Established

    The ground truth for the training set was established using a multi-expert annotation process:

    Lesion Detection Function:

    • Pre-Annotation: Full videos were randomly assigned to three gastroenterologists who annotated positive and negative lesion image segments.
    • Annotation (Truthing): The sampled image dataset was annotated by the same three gastroenterologists using software. The computer checked for consistency and merged the results. For inconsistencies (a consistency cutoff of less than 3, i.e., the three annotators did not all agree), two arbitration experts independently reviewed and modified the results.