510(k) Data Aggregation
(242 days)
Ankon Technologies Co., Ltd
NaviCam ProScan is an artificial intelligence (AI) assisted reading tool designed to aid small bowel capsule endoscopy reviewers in decreasing the time to review capsule endoscopy images for adult patients in whom the capsule endoscopy images were obtained for suspected small bowel bleeding. The clinician is responsible for conducting their own assessment of the findings of the AI-assisted reading through review of the entire video, as clinically appropriate. ProScan also assists small bowel capsule endoscopy reviewers in identifying the digestive tract location (oral cavity and beyond, esophagus, stomach, small bowel) of the image in adults. This tool is not intended to replace clinical decision making.
The NaviCam ProScan is artificial intelligence software that has been trained to process capsule endoscopy images of the small bowel acquired by the NaviCam Small Bowel Capsule Endoscopy System to recognize the various sections of the digestive tract and to recognize and mark images containing suspected abnormal lesions.
NaviCam ProScan is intended to be used as an adjunct to the ESView software of the NaviCam Small Bowel Capsule Endoscopy System (both cleared in K221590) and is not intended to replace gastroenterologist assessment or histopathological sampling.
NaviCam ProScan does not make any modification or alteration to the original capsule endoscopy video. It only overlays graphical markers and includes an option to display only the identified images. The entire small bowel capsule endoscopy video and the highlighted regions must still be independently assessed by the clinician, and appropriate actions taken according to standard clinical practice.
The NaviCam ProScan software includes two main algorithms, as illustrated in Figure 1 below:
- Digestive tract site recognition, which includes an image analysis algorithm and site segmentation algorithm to determine: oral and beyond, esophagus, stomach, and small bowel. Tract site is displayed as a color code on the video timeline with descriptions on the indicators at the bottom of the software user interface.
- Small bowel lesion recognition, which includes the small bowel lesion image analysis algorithm with lesion region localization. Potential lesions are marked with a bounding box as illustrated in Figure 2 below, with the active video played at the top section of the figure, and ProScan-identified images in the lower section, which includes images with suspected lesions and individual images marking the transition in the digestive tract. The algorithm is functional only on those sections of the GI tract that were identified as "small bowel" by the digestive tract site recognition software function.
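Because ProScan only overlays graphical markers and never alters the underlying video, the marking step can be thought of as drawing on a copy of each frame. The sketch below illustrates that idea only; it assumes frames arrive as H x W x 3 NumPy arrays, and the function name, coordinates, and color are hypothetical, not the vendor's implementation.

```python
import numpy as np

def overlay_box(frame: np.ndarray, box, color=(255, 255, 0), thickness=3):
    """Return a copy of `frame` with a rectangular marker drawn on it.

    `frame` is an H x W x 3 uint8 array and `box` is (x0, y0, x1, y1) in
    pixel coordinates. The original frame is left untouched, mirroring the
    description that ProScan only overlays markers on the displayed image
    and does not modify the source video.
    """
    marked = frame.copy()
    x0, y0, x1, y1 = box
    marked[y0:y0 + thickness, x0:x1] = color   # top edge
    marked[y1 - thickness:y1, x0:x1] = color   # bottom edge
    marked[y0:y1, x0:x0 + thickness] = color   # left edge
    marked[y0:y1, x1 - thickness:x1] = color   # right edge
    return marked
```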
Here's a detailed breakdown of the acceptance criteria and the studies demonstrating that the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
Lesion Detection - Standalone Algorithm Performance (Image-Level)
Metric | Reported Device Performance |
---|---|
Sensitivity | 95.05% (95% CI: 94.28%-95.72%) |
Specificity | 97.54% (95% CI: 97.28%-97.78%) |
AUC | 0.993 (95% CI: 0.981 to 1.000) |
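The image-level figures above are standard binary-classification metrics. A minimal sketch of how sensitivity, specificity, and confidence intervals of this kind could be computed from per-image predictions follows; the submission does not state which CI method was used, so a simple normal-approximation interval is assumed here for illustration.

```python
import numpy as np

def sens_spec_with_ci(y_true, y_pred, z=1.96):
    """Image-level sensitivity and specificity with approximate 95% CIs.

    y_true, y_pred: arrays of 0/1 labels (1 = image contains a lesion).
    The CI method used in the submission is not stated; a Wald
    (normal-approximation) interval is shown purely as an example.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))

    def prop_ci(k, n):
        p = k / n
        half = z * np.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half), min(1.0, p + half)

    return {"sensitivity": prop_ci(tp, tp + fn),   # TP / (TP + FN)
            "specificity": prop_ci(tn, tn + fp)}   # TN / (TN + FP)
```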
Tract Site Recognition - Standalone Algorithm Performance (Image-Level)
Tract Site | Sensitivity (95% CI) | Specificity (95% CI) |
---|---|---|
Oral cavity and beyond | 99.47% (99.14%-99.68%) | 99.50% (99.39%-99.58%) |
Esophagus | 98.92% (97.79%-99.50%) | 99.10% (98.98%-99.22%) |
Stomach | 99.60% (99.49%-99.69%) | 99.06% (98.80%-99.26%) |
Small Bowel | 99.26% (98.89%-99.51%) | 98.36% (98.18%-98.52%) |
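Because tract-site recognition is a multi-class problem, the per-site figures above are one-vs-rest sensitivity and specificity. The sketch below shows that computation; the site labels come from the indications statement, while the array inputs and function name are assumptions made for illustration.

```python
import numpy as np

SITES = ["oral cavity and beyond", "esophagus", "stomach", "small bowel"]

def per_site_sens_spec(y_true, y_pred):
    """Per-site (one-vs-rest) sensitivity/specificity at the image level.

    y_true, y_pred: arrays of site labels, one per image. Illustrative only;
    this is not the vendor's evaluation code.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    out = {}
    for site in SITES:
        pos = y_true == site            # images truly belonging to this site
        pred_pos = y_pred == site       # images classified as this site
        sens = np.mean(pred_pos[pos]) if pos.any() else float("nan")
        spec = np.mean(~pred_pos[~pos]) if (~pos).any() else float("nan")
        out[site] = (sens, spec)
    return out
```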
Clinical Performance (AI+Physician vs. Standard Reading)
Endpoint | Reported Device Performance (AI+Physician) | Reported Device Performance (Standard Reading) |
---|---|---|
Diagnostic Yield | 73.7% (95% CI: 65.3%-80.9%) | 62.4% (95% CI: 53.6%-70.7%) |
Reading Time | 3 minutes 50 seconds (±3 minutes 20 seconds) | 33 minutes 42 seconds (±22 minutes 51 seconds) |
Non-inferiority | Demonstrated non-inferiority to expert board reading and superiority to standard reading for diagnostic yield. | - |
False Negatives | 7 (compared to expert board) | 22 (compared to expert board) |
False Positives | 0 (after physician review) | 0 (after physician review) |
Study Details and Provenance
2. Sample Sizes and Data Provenance
Standalone Algorithm Testing (Lesion Detection)
- Test Set Sample Size: 218 patients
- Data Provenance: Obtained from 8 clinical institutions in China. The study was retrospective.
Standalone Algorithm Testing (Tract Site Recognition)
- Test Set Sample Size: 424 patients
- Data Provenance: Obtained from 8 clinical institutions in China. The study was retrospective.
Clinical Study (ARTIC Study)
- Test Set Sample Size: 133 patients (from an initial enrollment of 137).
- Data Provenance: Patients enrolled prospectively from 7 European centers (Italy, France, Germany, Hungary, Spain, Sweden, and UK) from February 2021 to January 2022.
3. Number of Experts and Qualifications for Ground Truth
Standalone Algorithm Testing (Lesion Detection & Tract Site Recognition)
- Number of Experts: Initially three gastroenterologists for pre-annotation, followed by two arbitration experts for review and modification. A total of five experts were involved in establishing the ground truth when including the arbitration experts.
- Qualifications: "Gastroenterologists" are explicitly stated. No specific experience level (e.g., years of experience) is provided for these experts in the available text.
Clinical Study (ARTIC Study)
- Number of Experts: An expert board consisting of 5 of the original 22 clinician readers was used to establish ground truth.
- Qualifications: The original 22 clinician readers "had capsule endoscopy experience of over 500 readings." It can be inferred that the 5 experts on the expert board had similar or higher qualifications.
4. Adjudication Method
Standalone Algorithm Testing (Lesion Detection & Tract Site Recognition)
- Method: Initial annotations by three gastroenterologists. "The computer automatically determines consistency and merges the classification results while preserving differing opinions." If consistency was less than a cutoff value (specifically "less than 3" for lesion detection, implying inconsistency among the 3 initial annotators), two arbitration experts independently review and modify the results. In difficult cases, "collective discussion and confirmation" were conducted by the adjudication experts. This aligns with a 3+2 adjudication model or a similar consensus-based approach with arbitration.
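A minimal sketch of what such a consensus-then-arbitration merge could look like per image is shown below. The reader count and cutoff follow the description above, but the function itself, the label strings, and the return convention are assumptions, not the vendor's code.

```python
from collections import Counter

def merge_annotations(labels_by_reader, consistency_cutoff=3):
    """Merge per-image labels from the initial readers (3+2-style scheme).

    labels_by_reader: e.g. ["lesion", "lesion", "normal"] for one image.
    Returns (label, needs_arbitration): the consensus label when enough
    readers agree, otherwise a flag sending the image to the two
    arbitration experts while the differing opinions are preserved.
    """
    counts = Counter(labels_by_reader)
    label, votes = counts.most_common(1)[0]
    if votes >= consistency_cutoff:     # e.g. all three readers agree
        return label, False
    return None, True                   # inconsistent -> arbitration review
```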
Clinical Study (ARTIC Study)
- Method: An expert board was used to "adjudicate the findings in case of disagreement" between standard readings and AI+Physician readings. Discordant cases were "re-evaluated and eventually reclassified during the adjudication phase." This suggests a consensus-based adjudication by the expert board. The exact protocol (e.g., how disagreements within the expert board were resolved) is not explicitly detailed, but it functions as the final ground truth determination.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Yes, an MRMC comparative effectiveness study was conducted (the ARTIC study).
- Effect Size of Human Readers' Improvement with AI vs. without AI Assistance:
- Diagnostic Yield: AI-assisted reading (AI+Physician) achieved a diagnostic yield of 73.7% compared to 62.4% for standard reading (without AI), showing an absolute improvement of 11.3 percentage points. This improvement was statistically significant (p=0.015).
- Reading Time: Mean reading time with AI assistance was 3 minutes 50 seconds, significantly faster than 33 minutes 42 seconds for standard reading. This represents a reduction of approximately 88.6% in mean reading time.
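The effect sizes above follow directly from the reported means; a quick arithmetic check (values copied from the table and bullets above, standard deviations ignored):

```python
# Diagnostic yield improvement and reading-time reduction from the ARTIC means.
yield_ai, yield_std = 0.737, 0.624
time_ai = 3 * 60 + 50           # 3 min 50 s  -> 230 s
time_std = 33 * 60 + 42         # 33 min 42 s -> 2022 s

abs_improvement = (yield_ai - yield_std) * 100           # ~11.3 percentage points
time_reduction = (time_std - time_ai) / time_std * 100   # ~88.6 % shorter reads

print(f"{abs_improvement:.1f} pp yield improvement, {time_reduction:.1f} % time reduction")
```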
6. Standalone Performance (Algorithm Only without Human-in-the-Loop)
- Yes, standalone performance testing was conducted for both the lesion detection function and the tract site recognition function.
- Lesion Detection (Standalone):
  - Patient-level sensitivity: 98%
  - Patient-level specificity: 37%
  - Image-level sensitivity: 95.05%
  - Image-level specificity: 97.54%
- Tract Site Recognition (Standalone):
  - Sensitivity and specificity values for each anatomical site were all above 98%.
- Important Caveat: The regulatory information states, "In the clinical study of the device, performance (sensitivity and specificity) of the device in the absence of clinician input was not evaluated. Therefore, the AI standalone performance in the clinical study of NaviCam ProScan has not been established." This highlights a distinction between the "standalone algorithm testing" reported in detail and the performance within the clinical use context (i.e., the AI output before a clinician potentially overrides it). The clinical study, ARTIC, primarily evaluated "AI+Physician" performance. The document explicitly notes that the number of false positive predictions from the AI software (in the absence of physician input) in the ARTIC study is unknown.
7. Type of Ground Truth Used
- Standalone Algorithm Testing: Expert consensus (multiple gastroenterologists with arbitration) on individual images and patient cases.
- Clinical Study (ARTIC Study): Expert board reading and adjudication (5 experienced readers) of videos. This essentially serves as an expert consensus ground truth for the clinical effectiveness study.
8. Sample Size for the Training Set
Lesion Detection Function:
- Training Set Sample Size: 1,476 patients (from a dataset of 2,642 patients).
Tract Site Recognition Function:
- Training Set Sample Size: 1,386 patients (from a dataset of 2,642 patients).
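The counts above are reported per patient rather than per image. One common way to produce such a patient-level partition is a grouped split, so that no patient contributes images to both sets. The sketch below is an assumption-laden illustration: the actual split procedure, split fractions, and any validation split are not described in the text, and the data generated here is synthetic.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical per-image dataset, grouped by patient ID (illustrative only).
rng = np.random.default_rng(0)
n_images = 10_000
patient_ids = rng.integers(0, 2_642, size=n_images)   # one patient ID per image
labels = rng.integers(0, 2, size=n_images)            # placeholder lesion labels

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(np.zeros(n_images), labels, groups=patient_ids))

# No patient contributes images to both partitions.
assert set(patient_ids[train_idx]).isdisjoint(patient_ids[test_idx])
print(len(np.unique(patient_ids[train_idx])), "training patients,",
      len(np.unique(patient_ids[test_idx])), "test patients")
```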
9. How Ground Truth for the Training Set Was Established
The ground truth for the training set was established using a multi-expert annotation process:
Lesion Detection Function:
- Pre-Annotation: Full videos were randomly assigned to three gastroenterologists who annotated positive and negative lesion image segments.
- Annotation (Truthing): The sampled image dataset was annotated by the same three gastroenterologists using software. The computer checked for consistency and merged the results. For inconsistencies (consistency below the cutoff value), two arbitration experts independently reviewed and modified the results.
(183 days)
ANKON Technologies Co., Ltd.
The NaviCam Small Bowel Capsule Endoscopy System is intended for visualization of the small bowel mucosa in adults. It may be used as a tool in the detection of abnormalities of the small bowel.
The NaviCam Small Bowel Capsule Endoscopy System is an endoscopic capsule imaging system intended to obtain images of the small bowel. It is comprised of the following components:
- Capsule (AKES-11SW, AKES-11SI): The disposable, ingestible NaviCam Small Bowel Capsule is designed to acquire video images during the natural propulsion through the GI tract. The capsule transmits the acquired images via an RF communication channel to the NaviCam Data Recorder located outside the body.
- Data recorder (AKR-1, AKRI-1): The Data Recorder is an external receiving and recording unit that receives and stores the acquired images from the capsule.
- ESView Software: The ESView is a software application for processing, analyzing, storing, and viewing the acquired images collected from the NaviCam Data Recorder to create a video of the images. The software also includes a reporting function to create detailed clinical reports and a capsule endoscopy atlas.
- Locator: The Locator is a handheld device that is used to turn the NaviCam Capsule on. It is also used for determining if the capsule is still in the body when the patient is not sure whether he/she expelled it.
The NaviCam Small Bowel Capsule Endoscopy System was assessed for its performance primarily through a comparative clinical study against a predicate device, the PillCam SB3 Capsule Endoscopy System, and also through various bench/in-vitro tests.
Here's a breakdown of the acceptance criteria and study details:
1. Table of Acceptance Criteria and Reported Device Performance
Test Type | Acceptance Criteria (Stated or Implied) | Reported Device Performance |
---|---|---|
Bench/In-Vitro Tests | Successfully passed all listed tests. | |
Biting Test | Ability to withstand applied forces similar to accidental biting. | Pass |
Angular Resolution Test | Measurement of MTF using ISO 12233 slanted-edge methodology and a new angular resolution method using LEDs. (No explicit pass threshold stated; meeting design specifications is implied.) | Pass |
Temperature Safety Test | Temperature change during operation within safe limits. (No explicit pass threshold stated; meeting design specifications is implied.) | Pass |
pH Test | Integrity of the capsule during exposure to simulated extreme pH levels. (No explicit pass threshold stated; meeting design specifications is implied.) | Pass |
Image Intensity Uniformity | Uniformity of image intensity. (No explicit pass threshold stated; meeting design specifications is implied.) | Pass |
Image Frame Rate Test | Verification that the higher frame rate provides good transmission properties. (No explicit pass threshold stated; meeting design specifications is implied.) | Pass |
Geometric Distortion Test | Determination of geometric distortion and local magnification. (No explicit pass threshold stated; meeting design specifications is implied.) | Pass |
Field of View (FOV) Test | Determination of the FOV value. (No explicit pass threshold stated; meeting design specifications is implied.) | Pass |
Battery Life Test | Battery life of at least 8 hours and capture of over 57,500 images. | Pass (demonstrated to last at least 8 hours and capture over 57,500 images). |
Image Resolution Test | Testing of image resolution. (No explicit pass threshold stated; meeting design specifications is implied.) | Pass |
Magnetic Field Test | Measurement of magnetic flux density on the capsule surface and non-optical bottom, and determination of the safety distance. (No explicit pass threshold stated; meeting design specifications is implied.) | Pass |
DOV Test | Measurement of MTF in air and underwater at different distances within the claimed DOV range, using ISO 12233 slanted-edge methodology and the angular resolution method. (No explicit pass threshold stated; meeting design specifications is implied.) | Pass |
Color and Gray Scale Test | Evaluation of optical performance. (No explicit pass threshold stated; meeting design specifications is implied.) | Pass |
Data Integrity Test | Data transmission between capsule, data recorder, and ESView software. (No explicit pass threshold stated; meeting design specifications is implied.) | Pass |
Clinical Study | | |
Diagnostic Overall Percent Agreement with predicate device | Not explicitly stated as a strict threshold; the implied criterion was substantial equivalence to the predicate device (PillCam SB3) in diagnostic performance, with the observed agreement rate taken to demonstrate similarity. | 89.66% (81.50%, 94.46%) overall percent agreement with the PillCam SB3, with a kappa of 0.6652 (0.4653, 0.8652). This was deemed to demonstrate similar performance and effectiveness. |
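The clinical endpoint above is an agreement statistic between the two capsule systems. The sketch below shows how overall percent agreement and Cohen's kappa of this kind are computed from paired per-subject findings; it is illustrative only, since the submission reports the summary figures but not the underlying 2x2 table, and the function name and inputs are assumptions.

```python
import numpy as np

def agreement_stats(navicam, pillcam):
    """Overall percent agreement and Cohen's kappa between paired findings.

    `navicam` and `pillcam` are arrays of 0/1 per-subject findings
    (1 = abnormality detected). Illustrative only.
    """
    navicam = np.asarray(navicam)
    pillcam = np.asarray(pillcam)
    p_o = np.mean(navicam == pillcam)                  # observed agreement
    # Chance agreement from each system's marginal positive/negative rates
    p_e = (np.mean(navicam) * np.mean(pillcam)
           + (1 - np.mean(navicam)) * (1 - np.mean(pillcam)))
    kappa = (p_o - p_e) / (1 - p_e)
    return p_o, kappa
```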
2. Sample Size Used for the Test Set and Data Provenance
The document mentions that the clinical study was a prospective study (NCT05086471).
However, the specific sample size (number of patients or cases) used for the test set is not provided in the given text.
The data provenance (e.g., country of origin) for the clinical study is also not stated.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
This information is not provided in the given text.
4. Adjudication Method for the Test Set
This information is not provided in the given text. The text only states that the NaviCam system was compared to the PillCam SB3 in terms of diagnostic performance; it does not detail how discrepancies were resolved or how consensus was reached when establishing ground truth or comparing diagnostic findings.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size of Human Reader Improvement with AI vs. Without AI Assistance
A comparative clinical study was performed, but its purpose was to compare the device (NaviCam Small Bowel Capsule Endoscopy System) against a predicate device (PillCam SB3 Capsule Endoscopy System), not to assess human reader improvement with versus without AI assistance in an MRMC design. The study evaluates the diagnostic agreement between the two capsule endoscopy systems. Therefore, the effect size of human readers improving with AI vs. without AI assistance is not applicable/not reported in this context.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done
The NaviCam Small Bowel Capsule Endoscopy System includes "ESView Software...for processing, analyzing, storing, and viewing the acquired images... The software also includes a reporting function". While the software performs analysis, the clinical study appears to evaluate its overall diagnostic performance in detecting abnormalities, which would typically involve human review of the generated images/reports. The text does not explicitly state if a standalone algorithm-only performance study was conducted separate from human interpretation. The reported "diagnostic Overall Percent Agreement" is for the system, which implies the combined interpretation of the images.
7. The Type of Ground Truth Used
The ground truth for the clinical study is based on the detection of abnormalities of the small bowel by both the NaviCam system and the predicate PillCam SB3 system. The phrase "diagnostic Overall Percent Agreement" implies that the agreement was measured against the findings of another diagnostic tool (the predicate device), which often serves as a form of "ground truth" in equivalence studies when a gold standard (like pathology) is not universally available for every finding. The document does not explicitly state that pathology or outcomes data were used as the definitive ground truth for every finding. It strongly suggests the predicate device's findings were used as the reference point for comparison.
8. The Sample Size for the Training Set
This information is not provided in the given text. The document focuses on the performance study data, not the training dataset for any underlying AI/software components.
9. How the Ground Truth for the Training Set Was Established
This information is not provided in the given text, as details about a training set are absent.