510(k) Data Aggregation (115 days)
CapsoCam Plus (SV-3) Capsule Endoscope System
The CapsoCam® Plus video capsule system is intended for visualization of the small bowel mucosa in adults. It may be used as a tool in the detection of abnormalities of the small bowel.
CapsoCam Plus (SV-3) Capsule Endoscope System is a single-use, ingestible video capsule that acquires and stores video images in on-board memory while moving through the gastrointestinal tract, propelled by natural peristalsis. The patient retrieves the capsule using the provided retrieval kit and returns it to the physician, who downloads and reviews the images on a computer. The capsule is typically excreted within 3 to 30 hours after swallowing. The device is contraindicated in patients:
- Who have known or suspected gastrointestinal obstructions, strictures or fistula
- Who are pregnant
- Who have gastroparesis
- Who have a swallowing disorder
The CapsoCam Plus (SV-3) capsule endoscope system is a single-use, ingestible capsule system for diagnostic visualization of the adult small bowel. The overall system consists of the ingestible CapsoCam (SV-3) capsule, the CapsoRetrieve® (CVR1) Capsule Retrieval Kit, the CapsoAccess® Capsule Data Access System (CDAS), and the CapsoView® (CVV) software. The capsule contains a panoramic color digital video camera, two silver-oxide watch batteries, white-LED light sources, a laser diode for data download and system control, and nonvolatile flash-memory data-storage electronics.
Here's a breakdown of the acceptance criteria and the studies that demonstrate the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly derived from the comparison to the predicate device (CapsoCam SV-1) and from the studies conducted to demonstrate equivalence and improved performance; explicit numerical acceptance criteria are not presented in a formal table within this document. The studies' results are instead presented as evidence of meeting performance expectations relative to the predicate and of overall diagnostic quality (the short sketch after the table re-derives the quoted percentages from their raw counts).
| Acceptance Criterion | Reported Device Performance (CapsoCam Plus SV-3) | Reference |
|---|---|---|
| Diagnostic Quality | 40 out of 42 subjects (95.2%) showed "Yes" for Image Diagnostic Quality. | Study 1, Demographics, Image Diagnostic Quality table |
| Small Bowel Completeness | 41 out of 42 subjects (97.6%) showed "Yes" for "Small Bowel Complete". | Study 1, Demographics, Small Bowel Complete table |
| Pathology Identification | Various pathologies identified (Angiectasia: 2/40, Ulcer: 2/40, Other: 4/38); none considered clinically significant in Study 1. Consensus on vascular (5/42) and polyp/mass (1/42) findings in Study 2. | Study 1, Pathologies Identified table; Study 2, Consensus on pathologies |
| Landmark Identification | High identification rates for 1st Esophageal (40/43), 1st Gastric (42/42), 1st Duodenal (42/42), 1st Cecal (41/42), Papilla (35/42). | Study 1, Landmarks Identified table |
| Overall Image Quality (vs. Predicate) | 28 out of 40 clips (70%) ranked the SV-3 image as better than SV-1; 12 (30%) ranked them as comparable; 0 (0%) ranked SV-1 as better. | Study 3, Results for Video Image Quality |
| Clinical Assessment Quality (vs. Predicate) | 1 out of 40 clips (2.5%) ranked the SV-3 image as better than SV-1; 39 (97.5%) ranked them as comparable; 0 (0%) ranked SV-1 as better. | Study 3, Results for Clinical Assessment Quality |
| Software Ease of Use (vs. Predicate) | 1 out of 40 clips (2.5%) ranked the SV-3 software as easier; 39 (97.5%) ranked them as equal; 0 (0%) ranked the SV-1 software as easier. | Study 3, Ease of Use |
| Reviewing Experience (vs. Predicate) | 2 out of 40 (5%) ranked the SV-1 software as worse; 38 (95%) ranked it as the same; 0 (0%) ranked the SV-1 software as better. | Study 3, Reviewing Experience |
| Consensus with Predicate Software (Study 3) | 100% agreement between old and new versions of the software where consensus was reached; 90.9% consensus for landmarks (10/11 clips); 86.2% consensus for significant clinical pathology (25/29 clips). | Study 3 |
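As a quick sanity check, the percentages quoted in the table follow directly from the raw counts. The short Python sketch below is ours, not part of the submission (the helper name `pct` and the labels are illustrative); it simply re-derives each figure:

```python
# Re-derive the percentages quoted in the table above from their raw counts.
# The counts come from the table; `pct` is an illustrative helper name.

def pct(numerator: int, denominator: int) -> float:
    """Percentage rounded to one decimal place."""
    return round(100.0 * numerator / denominator, 1)

counts = {
    "Image diagnostic quality (Study 1)":       (40, 42),  # 95.2
    "Small bowel complete (Study 1)":           (41, 42),  # 97.6
    "SV-3 image ranked better (Study 3)":       (28, 40),  # 70.0
    "Clinical assessment comparable (Study 3)": (39, 40),  # 97.5
    "Landmark consensus clips (Study 3)":       (10, 11),  # 90.9
    "Pathology consensus clips (Study 3)":      (25, 29),  # 86.2
}

for label, (n, total) in counts.items():
    print(f"{label}: {n}/{total} = {pct(n, total)}%")
```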
2. Sample Size Used for the Test Set and Data Provenance
- Study 1 (CVI-006 "Validation of CapsoCam® SV-3 Capsule Endoscopy System"):
- Sample Size: 42 "Per-Protocol" subjects (from 49 enrolled healthy volunteers).
- Data Provenance: Prospective; healthy volunteers enrolled in a clinical trial setting; the country is not specified.
- Study 2 (Comparisons of reads of select video clips from CVI-006 by Independent Blinded readers):
- Sample Size: Not explicitly stated as a number of subjects; the study used "images of landmarks and pathologies" extracted from the 42 subjects of Study 1, and the tables show N=42 for overall consensus.
- Data Provenance: Retrospective analysis of clips from the prospective Study 1.
- Study 3 (Comparison of SV-1 and SV-3 software performance):
- Sample Size: 40 video clips (11 normal, 29 with pathology) from the original SV-1 study.
- Data Provenance: Retrospective analysis of clips from an earlier SV-1 study.
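For orientation only, the three test sets described above can be captured as plain data, which also makes the implicit per-protocol fraction for Study 1 explicit (42 of 49 enrolled volunteers). This is a minimal sketch with field names of our choosing; nothing in it comes from the submission itself:

```python
# Summarize the three test sets as plain dictionaries (field names are illustrative).

test_sets = {
    "Study 1 (CVI-006)": {"enrolled": 49, "per_protocol": 42,
                          "provenance": "prospective, healthy volunteers"},
    "Study 2":           {"subjects": 42,
                          "provenance": "retrospective reads of Study 1 clips"},
    "Study 3":           {"video_clips": 40, "normal": 11, "with_pathology": 29,
                          "provenance": "retrospective, clips from an earlier SV-1 study"},
}

s1 = test_sets["Study 1 (CVI-006)"]
print(f"Study 1 per-protocol fraction: {s1['per_protocol']}/{s1['enrolled']} "
      f"= {100 * s1['per_protocol'] / s1['enrolled']:.1f}%")  # 85.7%
```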
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Study 1: The "principal investigator" assessed landmarks, completion of exam, pathologies, and image quality. The number of principal investigators is not specified (implied to be one for reporting purposes). Their qualifications are not explicitly detailed but should be a medical professional qualified to read capsule endoscopy studies.
- Study 2: "Independent Blinded readers" assessed landmark video quality, pathologies, and subjective questions. The number of independent readers is not explicitly stated in detail for consensus; however, the "Consensus Amongst readers with video clips of Landmarks" table implies multiple readers reached consensus. An example note clarifies for "discordant finding was identical for all 3 readers" in Study 3, suggesting at least 3 readers were involved in such assessments. Their qualifications are not explicitly detailed again, but should be experts in capsule endoscopy interpretation.
- Study 3: "Independent blinded readers" were used. As noted above, the text mentions "all 3 readers" for the discordant findings, suggesting 3 readers were involved in comparing the software versions. Their qualifications are not explicitly detailed.
4. Adjudication Method for the Test Set
- Study 1: The Principal Investigator made the assessments. No explicit adjudication method among multiple initial readers is mentioned for this phase.
- Study 2 & 3: "Consensus Amongst readers" and "Consensus agreement" are mentioned for various aspects. This implies an adjudication process where readers' opinions were reconciled into a single agreed-upon finding. While the specific mechanism (e.g., majority vote, discussion to consensus, expert adjudicator) is not detailed, it indicates a method was used to arrive at a single 'ground truth' for these multi-reader assessments. For Study 3, it notes that "The discordant finding was identical for all 3 readers," which means even where there wasn't full agreement with the original finding, the readers were consistent with each other.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Yes, Study 3 involved multiple readers comparing the performance of the SV-3 software to the SV-1 software on the same set of video clips. This constitutes a form of MRMC study in terms of software comparison.
- Effect Size of Human Reader Improvement with AI vs. without AI Assistance:
- This document describes a comparison between two versions of a device/software (SV-1 vs SV-3 software), not AI assistance for human readers versus human readers alone. The SV-3 software is an improvement to the device's image processing and viewing capabilities.
- The results show improvement in image quality preference: "28 (70%) Ranked the SV-3 image as better than the SV-1 image; 12 (30%) Ranked both SV-3 image and SV-1 image as comparable; 0 (0%) Ranked the SV-1 image as better than the SV-3 image."
- For clinical assessment quality, "39 (97.5%) Ranked both SV-3 image and SV-1 image as comparable; 1 (2.5%) Ranked the SV-3 image as better than the SV-1 image."
- This indicates that the newer device/software improved perceived image quality, with largely comparable clinical assessment quality, and slight improvements in ease of use and reviewing experience for a small fraction of clips (an illustrative summary of these preference counts follows this list).
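The submission reports these preference counts descriptively and does not report any statistical test. Purely as an illustration of how such paired preferences could be summarized, the sketch below applies an exact two-sided sign test (ties excluded) to the counts quoted above; the function name and the choice of test are ours, not the study's:

```python
from math import comb

def sign_test_two_sided(wins: int, losses: int) -> float:
    """Exact two-sided sign-test p-value; tied comparisons are excluded."""
    n = wins + losses
    k = max(wins, losses)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n  # P(X >= k), X ~ Binomial(n, 0.5)
    return min(1.0, 2 * tail)

# Image quality (Study 3): 28 clips favored SV-3, 0 favored SV-1, 12 ties.
print(sign_test_two_sided(wins=28, losses=0))  # ~7.5e-9: consistent preference for SV-3
# Clinical assessment quality: 1 clip favored SV-3, 0 favored SV-1, 39 ties.
print(sign_test_two_sided(wins=1, losses=0))   # 1.0: no detectable difference
```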
6. Standalone Performance Study
- Yes, Study 1 appears to be a standalone performance study for the CapsoCam Plus (SV-3) capsule endoscope system. It assesses the device's diagnostic quality, small bowel completeness, and ability to identify pathologies and landmarks in healthy volunteers using the SV-3 device alone. The results in the tables for Study 1 directly reflect the performance of the SV-3 system.
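The standalone results above are reported as point estimates only (40/42 and 41/42); the submission does not report confidence intervals. As an illustrative sketch under that caveat, and using only the standard library, the following computes 95% Wilson score intervals for those two proportions:

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

for label, k, n in [("Image diagnostic quality", 40, 42),
                    ("Small bowel completeness", 41, 42)]:
    lo, hi = wilson_interval(k, n)
    print(f"{label}: {k}/{n} = {k/n:.1%} (95% Wilson CI {lo:.1%} to {hi:.1%})")
```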
7. Type of Ground Truth Used
- Expert Consensus/Clinical Assessment:
- Study 1: The ground truth for pathologies, landmarks, diagnostic quality, and completeness was established by the "Principal Investigator's" clinical assessment during the study.
- Study 2 & 3: The ground truth for landmark and pathology identification in these studies appears to be based on "Consensus Amongst readers" or agreement with the "original pathologies identified" from the prior studies. This points to expert consensus as the primary ground truth.
8. Sample Size for the Training Set
- The document does not provide information about a training set. This document describes clinical validation studies for the device itself and its software, not the development or training of an AI algorithm. If the device incorporates AI, the training data for that AI is not detailed here.
9. How the Ground Truth for the Training Set Was Established
- As no training set is mentioned for an AI algorithm, this information is not available in the provided text.