Search Results

Found 16 results

510(k) Data Aggregation

    K Number: K252586
    Device Name: CADDIE
    Date Cleared: 2025-09-12 (28 days)
    Product Code: QNP
    Regulation Number: 876.1520
    Reference & Predicate Devices: N/A
    Why did this record match? Product Code: QNP

    Tags: AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, is PCCP Authorized, Third-party, Expedited review

    K Number: K251126
    Device Name: SKOUT system
    Date Cleared: 2025-05-09 (28 days)
    Product Code: QNP
    Regulation Number: 876.1520
    Why did this record match? Product Code: QNP

    Intended Use

    The SKOUT® system is a software device designed to detect potential colorectal polyps in real time during colonoscopy examinations. It is indicated as a computer-aided detection tool providing colorectal polyps location information to assist qualified and trained gastroenterologists in identifying potential colorectal polyps during colonoscopy examinations in adult patients undergoing colorectal cancer screening or surveillance.

    The SKOUT® system is only intended to assist the gastroenterologist in identifying suspected colorectal polyps and the gastroenterologist is responsible for reviewing SKOUT® suspected polyp areas and confirming the presence or absence of a polyp based on their own medical judgment. SKOUT® is not intended to replace a full patient evaluation, nor is it intended to be relied upon to make a primary interpretation of endoscopic procedures, medical diagnosis, or recommendations of treatment/course of action for patients. SKOUT® is indicated for white light colonoscopy only.

    Device Description

    The SKOUT® system is a software-based computer aided detection (CADe) system for the analysis of high-definition endoscopic video during colonoscopy procedures. The SKOUT® system is intended to aid gastroenterologists with the detection of potential colorectal polyps during colonoscopy by providing an informational visual aid on the endoscopic monitor using trained software that processes the endoscopic video in real time.

    Users will primarily interact with the SKOUT® system by observing the software display, including the polyp detection box and device status indicator signal.

    AI/ML Overview

    The provided FDA 510(k) clearance letter for the SKOUT® system (K251126) indicates that this submission is for a modified version of a previously cleared device (K241508) and asserts that the "inference algorithms have remained the same, therefore clinical performance remains unchanged from the clinical testing submitted in K213686." This means the detailed clinical performance data and ground truth establishment would likely reside in the K213686 submission, which is not provided in this document.

    However, based on the information present in the K251126 clearance letter, we can describe the non-clinical performance testing that was conducted to demonstrate substantial equivalence to the predicate device. The letter explicitly states:

    "Algorithm performance testing was performed for evaluation of true positives, false positives, and polyp detection time, with an expanded dataset for the added video processing tower."

    This indicates that some form of performance evaluation was done for the algorithm, even if the core clinical performance from K213686 is simply being referenced.

    Given the limitations of the provided document, here's a structured response based on the available information, with clear indications where information is not provided in K251126 and would require reviewing K213686:

    Acceptance Criteria and Device Performance for SKOUT® System (K251126)

    The SKOUT® system (K251126) is a modified version of a predicate device (K241508). The clearance letter explicitly states that "the inference algorithms have remained the same, therefore clinical performance remains unchanged from the clinical testing submitted in K213686." This implies that the core clinical performance metrics and their acceptance criteria were established and met in the K213686 submission.

    For the K251126 submission, non-clinical performance testing was conducted to demonstrate substantial equivalence. This included "Algorithm performance testing... for evaluation of true positives, false positives, and polyp detection time, with an expanded dataset for the added video processing tower." However, the specific quantitative acceptance criteria and the reported performance metrics (true positives, false positives, detection time) are NOT provided in this document.

    1. Table of Acceptance Criteria and Reported Device Performance

    Note: The specific quantitative acceptance criteria and reported device performance (True Positives, False Positives, Polyp Detection Time) for the algorithm from the K251126 submission are NOT explicitly provided in this document. The document refers to the unchanged clinical performance from K213686. The table below represents the types of metrics that would have been evaluated, as stated in the document, but the actual numerical values are missing.

    Performance Metric (Non-Clinical, for K251126) | Acceptance Criteria (Not Explicitly Stated in Doc) | Reported Device Performance (Not Explicitly Stated in Doc)
    True Positives (Algorithm) | Conformance with predicate performance from K213686 | "Passing results" (qualitative confirmation)
    False Positives (Algorithm) | Conformance with predicate performance from K213686 | "Passing results" (qualitative confirmation)
    Polyp Detection Time (Algorithm) | Conformance with predicate performance from K213686 | "Passing results" (qualitative confirmation)
    Video Delay (Marker Annotation) | 0 ms | 0 ms (no standard error, minimum resolution 1.1 ms)
    Video Delay (Device) | 0 ms | 0 ms (no standard error, minimum resolution 1.1 ms)
    Pixel Level Degradation | No degradation introduced to Endoscopic System | No pixel level degradation

    2. Sample Size and Data Provenance for Test Set

    • Sample Size Used for Test Set: The document mentions "an expanded dataset for the added video processing tower" for algorithm performance testing. However, the specific sample size (number of cases, images, or polyps) is NOT provided in this document for either the K251126 testing or the referenced K213686 clinical testing.
    • Data Provenance: Not explicitly stated in this document. For K251126, it implies additional video data for specific hardware validation. For the core clinical performance (K213686), this information would be detailed in that submission. It is generally assumed, for FDA clearance, that data would represent diverse patient populations.
    • Retrospective or Prospective: Not explicitly stated in this document for the algorithm performance testing in K251126. Clinical studies underpinning K213686 might have been retrospective or prospective, but this is not discussed in the provided text.

    3. Number and Qualifications of Experts for Ground Truth

    • Number of Experts: Not provided in this document for either the K251126 testing or the referenced K213686 clinical testing.
    • Qualifications of Experts: Not provided in this document. For ground truth in medical imaging, experts are typically board-certified specialists (e.g., gastroenterologists or pathologists) with significant experience.

    4. Adjudication Method for Test Set

    • Adjudication Method: Not provided in this document. For K251126, the performance testing was focused on validating the algorithm with expanded data, not necessarily re-establishing clinical ground truth for a human-in-the-loop study. For K213686, the adjudication method for ground truth establishment for polyp detection would be detailed in that submission. Common methods include consensus reading (e.g., 2+1, where two readers agree, or a third adjudicates disagreement) or pathology correlation.
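The 2+1 consensus scheme mentioned above can be sketched as follows. This is illustrative only: the submissions do not state which adjudication scheme, if any, was used, and the function name and label data below are invented for the example.

```python
def adjudicate_2plus1(reader_a: int, reader_b: int, reader_c: int) -> int:
    """2+1 consensus: two primary readers label each case independently;
    a third reader is consulted only when the first two disagree."""
    if reader_a == reader_b:
        return reader_a
    return reader_c

# Labels per case: 1 = polyp present, 0 = absent (hypothetical data).
cases = [(1, 1, 0), (1, 0, 1), (0, 0, 1), (0, 1, 0)]
ground_truth = [adjudicate_2plus1(a, b, c) for a, b, c in cases]
```

Pathology correlation, by contrast, replaces reader consensus entirely with the histology result for each resected lesion.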

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study Conducted: The document for K251126 does not describe an MRMC comparative effectiveness study directly for this submission. It states that "clinical performance remains unchanged from the clinical testing submitted in K213686." An MRMC study would be common for a device that assists human readers (CADe), and such a study would likely have been part of the original K213686 submission to demonstrate clinical benefit.
    • Effect Size of Human Reader Improvement: Since an MRMC study is not detailed for K251126, the effect size is not provided here. If such a study was performed for K213686, the effect size (e.g., improvement in adenoma detection rate, per-lesion sensitivity) would be reported in that submission.

    6. Standalone (Algorithm Only) Performance

    • Standalone Performance Done: Yes, "Algorithm performance testing was performed for evaluation of true positives, false positives, and polyp detection time." This indicates a standalone evaluation of the algorithm's detection capabilities. However, the specific quantitative metrics (e.g., sensitivity, specificity, FROC curves) are NOT provided in this document. It only states that the testing demonstrated "passing results." The document also notes that "SKOUT® is not intended to replace a full patient evaluation, nor is it intended to be relied upon to make a primary interpretation of endoscopic procedures, medical diagnosis, or recommendations of treatment/course of action for patients," reinforcing its role as a CADe tool for human assistance.

    7. Type of Ground Truth Used

    • Type of Ground Truth: The document implies that ground truth for "potential colorectal polyps" was established for the algorithm performance testing. For a polyp-detection device, the most robust ground truth is generally histopathology (pathology results) from biopsied or resected lesions. Clinical consensus from expert endoscopists is also commonly used, especially where no biopsy is performed. The document does not explicitly state the type of ground truth used, but for polyp detection, pathology is the gold standard.

    8. Sample Size for Training Set

    • Sample Size for Training Set: Not provided in this document. The document primarily focuses on the substantial equivalence and validation of changes rather than a full re-description of the training process for the core AI algorithm, as the algorithm itself is stated to be unchanged from K213686.

    9. How Ground Truth for Training Set Was Established

    • Ground Truth Establishment for Training Set: Not provided in this document. Similar to the test set, the methods for establishing ground truth for the training data (e.g., expert annotations, pathology reports, adjudicated consensus) would have been detailed in the original K213686 submission.

    In summary, the provided FDA clearance letter for K251126 confirms that algorithm performance testing (for true positives, false positives, and detection time) was conducted for this submission, particularly with an "expanded dataset for the added video processing tower." However, it relies on the clinical performance established in the predicate's prior submission (K213686) because the core inference algorithms are unchanged. Therefore, many details regarding the clinical acceptance criteria, sample sizes, expert qualifications, and ground truth establishment for the clinical performance aspect are referred to the previous submission and are not present in this document.


    K Number: K244023
    Date Cleared: 2025-01-24 (28 days)
    Product Code: QNP
    Regulation Number: 876.1520
    Why did this record match? Product Code: QNP

    Intended Use

    ME-APDS (Magentiq Eye's Automatic Polyp Detection System) is intended to be used by endoscopists as an adjunct to the common video colonoscopy procedure (screening and surveillance), aiming to assist the endoscopist in identifying lesions during the colonoscopy procedure by highlighting regions with visual characteristics consistent with different types of mucosal abnormalities that appear in the colonoscopy video during the procedure. Highlighted regions can be independently assessed by the endoscopist and appropriate action taken according to standard clinical practice.

    ME-APDS is trained to process video images which may contain regions consistent with polyps.

    ME-APDS is limited for use with standard white-light endoscopy imaging only.

    ME-APDS is intended to be used as an adjunct to endoscopy procedures and is not intended to replace histopathological sampling as means of diagnosis.

    Device Description

    ME-APDS™ (MAGENTIQ-COLO) is intended to be used as an adjunct to the common video colonoscopy procedure. The system application aims to assist the endoscopist in identifying lesions, such as polyps, in real time during colonoscopy procedures. The device is not intended to be used for diagnosis or characterization of lesions, and does not replace clinical decision making.

    The system acquires the digital video output signal from the local endoscopy camera and processes the video frames. It runs deep machine learning and additional supporting algorithms in real time on the video frames in order to detect and identify regions having characteristics consistent with different types of mucosal abnormalities such as polyps. The output video with the detected lesions is presented on a separate screen, highlighting the suspicious areas on the original video. The user can also take snapshots of the videos, with and without the highlighting of the suspicious areas, record videos and view in full screen mode.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study details for the MAGENTIQ-COLO device, based on the provided document:

    1. Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implied by the reported performance metrics, particularly "Polyp-wise Recall" and "False Positives Per Frame (FPPF)". The study aims to demonstrate that the device performs comparably to or better than the predicate device.

    Acceptance Criteria / Metric | Reported Device Performance (Full Testing Dataset)
    Polyp-wise Recall (PRecall1) | 97.9% [96.63%, 98.94%]
    Polyp-wise Recall (PRecall3) | 95.3% [93.39%, 96.96%]
    Polyp-wise Recall (PRecall5) | 93.2% [91.01%, 95.15%]
    Polyp-wise Recall (PRecall7) | 90.6% [88.19%, 92.91%]
    False Positives Per Frame (FPPF) | 0.0333 (threshold achieved)
    Polyps with Histology: PRecall1 | 99.7% [99.12%, 100.0%]
    Polyps with Histology: PRecall7 | 99.7% [99.11%, 100.0%]
    Median Coverage of Polyps (with histology) | 81.7%
    Marker Annotation Latency (Median) | 133 msec for FHD, 157 msec for 4K

    Note: The document states that "The testing results were observed to be as expected and support that the device has similar performance to the predicate device," implying that these reported values met the implicit acceptance criteria for substantial equivalence.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size (Test Set): 212 unique full colonoscopy videos, containing 702 polyps (16 videos contained no polyps).
    • Data Provenance: The document does not explicitly state the country of origin or whether the data was retrospective or prospective.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    The document does not explicitly state the number of experts used to establish the ground truth or their specific qualifications (e.g., "radiologist with 10 years of experience"). However, it references polyps "verified by histology" and "reported in the procedure report," implying clinical expert input.

    4. Adjudication Method for the Test Set

    The document does not describe a specific adjudication method like 2+1 or 3+1. The ground truth seems to be derived from documented polyps in the "procedure report" and "histology findings," suggesting a standard clinical reporting process rather than a specific consensus method for this study.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not reported in this document. The study described is a standalone performance test of the algorithm. The document mentions that the clinical validation used to support the device's polyp detection functions was conducted in a previous submission (K223473). This K223473 submission might contain an MRMC study, but it's not detailed here.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, a standalone performance study was done. The "Standalone Performance Testing" section describes how "The algorithm was tested offline" on an independent dataset to evaluate its recall, false positive performance, and false positives per full video rate without direct human interaction during the test.

    7. Type of Ground Truth Used

    The ground truth used for the test set was a combination of:

    • Histopathology findings: For polyps with histology reports.
    • Procedure reports: For polyps identified and documented during the colonoscopy procedure.

    8. Sample Size for the Training Set

    The document does not provide the sample size for the training set. It only states that "ME-APDS is trained to process video images which may contain regions consistent with polyps."

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how the ground truth for the training set was established. It only broadly states that the system "runs deep machine learning" and is "trained to process video images."



    K Number: K240044
    Device Name: CADDIE
    Date Cleared: 2024-07-24 (201 days)
    Product Code: QNP
    Regulation Number: 876.1520
    Reference & Predicate Devices: N/A
    Why did this record match? Product Code: QNP

    Intended Use

    The CADDIE computer-assisted detection device is intended to assist the gastroenterologist in detecting suspected colorectal polyps only. The gastroenterologist is responsible for reviewing CADDIE suspected polyp areas and confirming the presence or absence of a polyp based on their own medical judgment.

    CADDIE is not intended to replace a full patient evaluation, nor is it intended to be relied upon to make a primary interpretation of endoscopic procedures, medical diagnosis, or recommendations of action for patients. The CADDIE computer-assisted detection device is limited for use with standard white-light endoscopy imaging only.

    Device Description

    CADDIE is cloud-based artificial intelligence medical device software. CADDIE interfaces with the video feed generated by an endoscopic video processor during a colonoscopy procedure.

    The software is intended to be used by trained and qualified healthcare professionals as an accompaniment to video endoscopy for the purpose of drawing attention to regions with visual characteristics consistent with colonic mucosal lesions (such as polyps and adenomas).

    CADDIE analyses the data from the endoscopic video processor in real-time and provides information to aid the endoscopist in detecting suspected colorectal polyps, if they are in the field of view of the endoscope.

    The areas highlighted by CADDIE are not to be interpreted as definite polyps or adenomas. The responsibility to make a decision as to whether or not a highlighted region contains a polyp or is an adenoma lies with the user. The endoscopist is responsible for reviewing CADDIE suspected polyp areas and confirming the presence or absence of a polyp and its classification based on their own medical judgement.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the CADDIE device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Polyp Detection (Standalone Bench-testing Data):

    Name | Description | Acceptance Criteria (Success Criteria) | Reported Device Performance [95% CI]
    Object-level True Positive Rate (TPR)* | Proportion of polyps detected by the device (>0.5 seconds & IoU >20%) and confirmed by pathology. | > 80% (to show a lower miss rate than the 25% miss rate in clinical practice) | 98.27% [97.33, 99.20]
    Frame-Level False Positive Rate (FPR)* | Proportion of frames (%) in which CADDIE detects a box (>0.5 seconds) that is not a histopathology-confirmed polyp. | |
    Frame-Level TPR** | Proportion of frames (%) with confirmed polyps in which CADDIE detects the polyp (>0.5 seconds & IoU >20%). | Not applicable | 54.92% [53.02, 56.81]

    *Primary Endpoints; **Secondary Endpoints
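The object-level criterion above pairs a detection box with a ground-truth polyp when their overlap exceeds IoU 20%. A standard intersection-over-union computation (a generic sketch; the corner-coordinate box format here is an assumption, and the persistence requirement of >0.5 seconds is not modeled):

```python
def iou(box_a, box_b):
    """Intersection over union for boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

# A detection counts toward the object-level TPR when IoU exceeds 0.20.
matched = iou((0, 0, 10, 10), (5, 0, 15, 10)) > 0.20
```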


    Cecum AI (Standalone Bench-testing Data):

    Name | Description | Acceptance Criteria (Success Criteria) | Reported Device Performance [95% CI]
    Frame-level true positive rate (TPR) | The proportion (%) of all the frames annotated with cecum which the Cecum AI identifies correctly. | Frame-level TPR > 80% | 94.05% [91.58, 96.52]
    Frame-level false positive rate (FPR)* | The proportion (%) of all the frames annotated without cecal landmarks in which the Cecum AI incorrectly identifies the cecum. | |

    Clinical study (PPA, non-inferiority margin -15%): CADe 53.9%, SoC 53.4% (Difference 0.5% [-5.0%, ∞])

    Conclusion: The device met all stated acceptance criteria in the standalone testing and demonstrated superiority in APC with non-inferiority in PPA in the clinical study.
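The bracketed ranges reported throughout are 95% confidence intervals on proportions. One common way to compute such an interval is the Wilson score method, sketched below. This is a generic illustration: the submissions do not state which interval method was used, and patient-clustered data would need a cluster adjustment on top of this.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion
    (z = 1.96 gives the two-sided 95% level)."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(95, 100)  # ≈ (0.888, 0.978)
```

Unlike the normal-approximation interval, the Wilson interval stays inside [0, 1] even for proportions near 100%, which matters for results like 99.7% [99.11%, 100.0%].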

    2. Sample Size Used for the Test Set and Data Provenance

    Polyp Detection Standalone Bench-testing Dataset:

    • Sample Size (Subjects): 389 subjects.
    • Data Provenance: Not explicitly stated, but the demographics include African American, American Indian, Asian, Caucasian, and Hispanic races/ethnicities, suggesting a diverse multi-region dataset, potentially US-based given the specific racial categories listed. This was a retrospective analysis as it used recorded colonoscopy videos and compared results to historical control (known polyp status per frame).

    Cecum AI Standalone Bench-testing Dataset:

    • Sample Size (Frames): 5092 total frames (2833 positive frames, 2259 negative frames).
    • Data Provenance: Not explicitly stated, but it uses recorded colonoscopy frames, implying a retrospective analysis.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    Polyp Detection Standalone Bench-testing Dataset:

    • Experts: A team of trained clinical annotators initially labeled polyp structures, followed by an additional layer of review by a separate team of experts.
    • Qualifications: The "separate team of experts" had "over 2000 endoscopic procedures experience."

    Cecum AI Standalone Bench-testing Dataset:

    • Experts: A team of trained clinical annotators labeled cecal structures.
    • Qualifications: Not explicitly stated beyond "trained clinical annotators."

    4. Adjudication Method for the Test Set

    Polyp Detection Standalone Bench-testing Dataset:

    • Annotation was performed on a per-frame basis. A "team of trained clinical annotators" labeled polyp structures, followed by an "additional layer of review by a separate team of experts." This indicates a multi-reader review process, likely with a consensus or hierarchical adjudication, though the exact method (e.g., 2+1, 3+1) is not specified.

    Cecum AI Standalone Bench-testing Dataset:

    • Annotation was performed on a per-frame basis by a "team of trained clinical annotators." No additional layer of review or specific adjudication method (like 2+1) is mentioned.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size of Human Improvement with AI vs. Without AI Assistance

    • Yes, a prospective, multi-center, MRMC, randomized controlled, parallel group trial was done.
    • Effect Size of Human Improvement with AI vs. Without AI Assistance:
      • Adenomas Per Colonoscopy (APC): CADDIE (with AI) resulted in an APC of 0.82 ± 1.40, while Standard of Care (without AI) had an APC of 0.62 ± 1.19. The ratio of CADe to SoC was 1.33 (95% CI: 1.06, 1.67), meaning 33% more adenomas per colonoscopy were detected with AI assistance.
      • Adenoma Detection Rate (ADR): CADe group had an ADR of 42.9%, SoC had 35.9%. The difference was 7.1% (95% CI: 0.5%, 13.7%), meaning AI assistance led to a 7.1% absolute increase in the proportion of examinations with at least one adenoma detected.
      • AI assistance also led to significant increases in detection of diminutive (≤5 mm) adenomas/adenocarcinomas (29% more) and large (≥10 mm) adenomas/adenocarcinomas (93% more).
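The headline effect sizes can be sanity-checked from the summary statistics above; the small discrepancies against the reported 1.33 and 7.1% arise because the submission's figures were presumably computed on unrounded data:

```python
# Adenomas per colonoscopy (APC), CADe arm vs. standard of care.
apc_cade, apc_soc = 0.82, 0.62
apc_ratio = apc_cade / apc_soc        # ≈ 1.32 (reported: 1.33)

# Adenoma detection rate (ADR), in percent.
adr_cade, adr_soc = 42.9, 35.9
adr_difference = adr_cade - adr_soc   # ≈ 7.0 percentage points (reported: 7.1%)
```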

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • Yes, standalone performance testing was performed for both the Polyp Detection component and the Cecum AI component.
      • For Polyp Detection, a set of recorded colonoscopy videos was analyzed by CADDIE, and the results were compared to historical controls.
      • For Cecum AI, a set of recorded colonoscopy frames were analyzed by Cecum AI, and the results were compared to historical controls.

    7. The Type of Ground Truth Used

    Polyp Detection Standalone Bench-testing:

    • Histology: Each polyp was "histologically confirmed." The ground truth for polyp annotations was based on these confirmed histology reports.

    Cecum AI Standalone Bench-testing:

    • Expert Annotation: Ground truth reference standards were "annotations performed on a per-frame basis, where a team of trained clinical annotators labelled cecal structures with a bounding box."

    Clinical Study (MRMC):

    • Histology/Pathology: The primary and secondary endpoints (APC, PPA, ADR, etc.) were based on "histologically confirmed" findings of polyps, adenomas, adenocarcinomas, and sessile serrated lesions.

    8. The Sample Size for the Training Set

    Polyp Detection Development Datasets:

    • Number of Polyps: 1711 polyps.
    • Number of Patients: 906 patients.
    • Number of Frames: 318,603 frames (162,207 polyp frames and 156,396 non-polyp frames).
      • This dataset was used for training, tuning, and testing (development data, separate from bench-testing data).

    Cecum AI Development Datasets:

    • Number of Patients: 1467 patients.
    • Number of Images: 17,844 images.
      • This dataset was used for training, tuning, and testing (development data, separate from bench-testing data).

    9. How the Ground Truth for the Training Set Was Established

    Polyp Detection Development Datasets:

    • Ground truth was based on a combination of histology (for 1296 polyps from 714 patients) and optical confirmation by additional endoscopists (for 415 polyps confirmed through resection or photo-documentation, but not histopathology).

    Cecum AI Development Datasets:

    • The ground truth for the Cecum AI development dataset was established by using "informative static photo-documentation images, as well as images extracted from videos of cecal landmarks including appendiceal orifice (AO), ileocecal valve (ICV)." While not explicitly stated as "expert annotation" for the training set, this description implies that the landmarks were identified and labeled. The standalone test set confirmed ground truth by "a team of trained clinical annotators," suggesting a similar method for development.

    K Number: K241508
    Device Name: SKOUT® system
    Date Cleared: 2024-07-03 (36 days)
    Product Code: QNP
    Regulation Number: 876.1520
    Why did this record match? Product Code: QNP

    Intended Use

    The SKOUT® system is a software device designed to detect potential colorectal polyps in real time during colonoscopy examinations. It is indicated as a computer-aided detection tool providing colorectal polyps location information to assist qualified and trained gastroenterologists in identifying potential colorectal polyps during colonoscopy examinations in adult patients undergoing colorectal cancer screening or surveillance.

    The SKOUT® system is only intended to assist the gastroenterologist in identifying suspected colorectal polyps and the gastroenterologist is responsible for reviewing SKOUT® suspected polyp areas and confirming the presence or absence of a polyp based on their own medical judgment. SKOUT® is not intended to replace a full patient evaluation, nor is it intended to be relied upon to make a primary interpretation of endoscopic procedures, medical diagnosis, or recommendations of treatment/course of action for patients. SKOUT® is indicated for white light colonoscopy only.

    Device Description

    The SKOUT system is a software-based computer aided detection (CADe) system for the analysis of high-definition endoscopic video during colonoscopy procedures. The SKOUT system is intended to aid gastroenterologists with the detection of potential colorectal polyps during colonoscopy by providing an informational visual aid on the endoscopic monitor using trained software that processes the endoscopic video in real time.

    Users will primarily interact with the SKOUT system by observing the software display, including the polyp detection box and device status indicator signal.

    AI/ML Overview

    The provided text describes an FDA 510(k) clearance for the SKOUT® system, a software device designed to detect potential colorectal polyps during colonoscopy. However, it focuses on demonstrating substantial equivalence to a predicate device (K240781), which in turn traces its clinical performance to an earlier clearance (K213686). The current submission (K241508) mainly highlights minor software refinements and states that the "clinical performance remains unchanged from the clinical performance submitted in K213686." Therefore, the details requested about acceptance criteria and the study proving the device meets them primarily refer to the data supporting K213686, which is not fully detailed in this document.

    Based on the provided K241508 document, here's the information that can be extracted, and where the information is missing:

    1. A table of acceptance criteria and the reported device performance

    The document states, "The inference algorithms [use] the same architecture and meet the same performance requirements as the predicate device, therefore clinical performance remains unchanged from the clinical performance submitted in K213686." This implies that the acceptance criteria and reported performance for K241508 are identical to those established for K213686. However, the specific acceptance criteria and numerical performance metrics are not provided in this document.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Test Set Sample Size: Not explicitly stated for K241508. The document mentions "new data representing 61% of the cumulative data" from 27 new clinical sites compared to the predicate, used for retraining and refinement. However, the size of the test set that explicitly demonstrated performance against acceptance criteria for this specific submission is not detailed. The phrase "clinical performance remains unchanged from the clinical performance submitted in K213686" suggests that the original clinical performance evaluation from K213686 is referenced, but its test set details are not here.
    • Data Provenance: The document states "Utilization of data from 30+ unique clinical sites, of which 27 were new compared to the predicate device." It does not specify the countries of origin or if the data was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not explicitly stated in this document. This information would likely be found in the original K213686 submission.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not explicitly stated in this document. This information would likely be found in the original K213686 submission.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance

    The document mentions "The inference algorithms [have] the same architecture and meet the same performance requirements as the predicate device, therefore clinical performance remains unchanged from the clinical performance submitted in K213686." This suggests that if such a study was performed, it was for K213686. However, the details of whether an MRMC study was done, its effect size, or human reader improvement are not provided in this document.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    The document states that the system "is only intended to assist the gastroenterologist" and "is not intended to replace a full patient evaluation." This indicates its role as a human-in-the-loop tool. While standalone performance data might have been collected as part of the technical evaluation, the document does not explicitly describe a standalone performance study as the primary means of demonstrating effectiveness. It alludes to "algorithm performance" being assessed as part of "additional bench software testing" to meet special controls.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Not explicitly stated in this document. This information would likely be found in the original K213686 submission. For polyp detection, pathology is a common ground truth, but expert consensus is also frequently used for live video analysis without immediate pathology.

    8. The sample size for the training set

    The document mentions "Utilization of data from 30+ unique clinical sites, of which 27 were new compared to the predicate device, with new data representing 61% of the cumulative data." This composite data was used for "Refinement/retraining of polyp detection algorithm." However, the total numerical sample size (e.g., number of colonoscopies, video frames, or polyps) for the training set is not explicitly stated.

    9. How the ground truth for the training set was established

    Not explicitly stated in this document. This information would likely be found in the original K213686 submission.


    Summary of Missing Information and Recommendation:

    The provided document (K241508) is a 510(k) summary for a modified device. It heavily relies on the performance demonstrated by an earlier predicate device (K213686) by asserting "clinical performance remains unchanged from the clinical performance submitted in K213686." To answer most of your detailed questions regarding acceptance criteria, study design, ground truth establishment, expert qualifications, and specific performance metrics, you would need to access the information contained in the K213686 FDA submission. The current document primarily confirms the substantial equivalence of the modified SKOUT® system (K241508) to its immediate predicate (K240781), which itself points back to K213686 for clinical performance.


    K Number
    K240781
    Device Name
    SKOUT® system
    Date Cleared
    2024-04-19

    (29 days)

    Product Code
    Regulation Number
    876.1520
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    QNP

    Intended Use

    The SKOUT® system is a software device designed to detect potential colorectal polyps in real time during colonoscopy examinations. It is indicated as a computer-aided detection tool providing colorectal polyps location information to assist qualified and trained gastroenterologists in identifying potential colorectal polyps during colonoscopy examinations in adult patients undergoing colorectal cancer screening or surveillance.

    The SKOUT® system is only intended to assist the gastroenterologist in identifying suspected colorectal polyps and the gastroenterologist is responsible for reviewing SKOUT® suspected polyp areas and confirming the presence or absence of a polyp based on their own medical judgment. SKOUT® is not intended to replace a full patient evaluation, nor is it intended to be relied upon to make a primary interpretation of endoscopic procedures, medical diagnosis, or recommendations of treatment/course of action for patients. SKOUT® is indicated for white light colonoscopy only.

    Device Description

    The SKOUT® system is a software-based computer aided detection (CADe) system for the analysis of high-definition endoscopic video during colonoscopy procedures. The SKOUT system is intended to aid gastroenterologists with the detection of potential colorectal polyps during colonoscopy by providing an informational visual aid on the endoscopic monitor using trained software that processes the endoscopic video in real time.

    Users will primarily interact with the SKOUT system by observing the software display, including the polyp detection box and device status indicator signal.

    AI/ML Overview

    The provided document, an FDA 510(k) summary for the SKOUT® system (K240781), primarily focuses on demonstrating substantial equivalence to a predicate device (K230658) and does not contain the detailed acceptance criteria or the specific study results from a primary clinical performance study.

    The document indicates that "the inference algorithms have remained the same, therefore clinical performance remains unchanged from the clinical performance submitted in K213686." This suggests that the clinical performance evaluation was conducted for a previous version or submission (K213686), and the current submission relies on that prior assessment.

    Therefore, I cannot provide all the requested information using only the text you provided. The document explicitly states: "Performance data demonstrates that the SKOUT system is as safe and effective as the predicate device." However, it does not explicitly show the full performance data, acceptance criteria, sample sizes, or ground truth establishment details for that primary performance study (K213686).

    Based on the provided text, here is what can be extracted and what information is missing:

    Information Extracted from the Provided Text:

    • Device Performance Reported: The document states that "SKOUT system demonstrated passing results in all applicable testing." and "Performance data demonstrates that the SKOUT system is as safe and effective as the predicate device."
    • Adjudication Method: "None" is inferred for the listed "Performance Testing," which comprises non-clinical tests (software verification and validation, bench software testing). For the clinical performance from K213686, the adjudication method is not described in this document.
    • Standalone Performance: The non-clinical testing described seems to be for algorithm-only performance ("bench software testing was performed to confirm the device meets the special controls in 21 CFR 876.1520 for true and false positives, pixel degradation and video delays."). However, the specific metrics (e.g., sensitivity, specificity for polyp detection) are not reported here.
    • Ground Truth Type: For the non-clinical testing, the "ground truth" seems to be defined by the design requirements and special controls for software (e.g., "true and false positives"). For the clinical performance (K213686), the type of ground truth is not specified.

    Missing Information (Not Present in the Provided Text):

    • A table of acceptance criteria and the reported device performance: While it states "passing results," the specific numerical acceptance criteria and the corresponding numerical performance values are not provided.
    • Sample size used for the test set and the data provenance: Not described for the underlying clinical performance study (K213686).
    • Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not described for the underlying clinical performance study (K213686).
    • Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance: Not described for the underlying clinical performance study (K213686). The device is a CADe system, which suggests human-in-the-loop use, but specific MRMC study results are not here.
    • The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not described for the underlying clinical performance study (K213686).
    • The sample size for the training set: Not described.
    • How the ground truth for the training set was established: Not described.

    Based on the provided document, here's a structured response (with noted limitations). The document asserts substantial equivalence to the predicate (K230658) and states that "the inference algorithms have remained the same, therefore clinical performance remains unchanged from the clinical performance submitted in K213686," meaning the definitive study proving device performance against acceptance criteria was conducted for K213686 and is not detailed here.

    Here's what can be gathered:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document mentions "Additional bench software testing was performed to confirm the device meets the special controls in 21 CFR 876.1520 for true and false positives, pixel degradation and video delays." and "SKOUT system demonstrated passing results in all applicable testing." However, the specific numerical acceptance criteria (e.g., minimum sensitivity, maximum false positives per minute) and the quantified reported device performance values against these criteria are not provided in this document.

    2. Sample Size and Data Provenance (for the test set):

    Not explicitly stated for the underlying clinical performance study (K213686). The "Performance Testing" section describes non-clinical software verification and validation, which usually involves test cases rather than patient sample sizes.

    3. Number of Experts and Qualifications for Ground Truth:

    Not explicitly stated for the underlying clinical performance study (K213686).

    4. Adjudication Method for the Test Set:

    Not explicitly stated for the underlying clinical performance study (K213686). For the "Performance Testing" described in this document (non-clinical bench software testing), an adjudication method is not applicable in the human-reader sense.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    The document does not provide information about an MRMC comparative effectiveness study or the effect size of human readers improving with AI vs. without AI assistance. While the device is a Computer-Aided Detection (CADe) tool, which inherently assists human readers, the details of such a study are absent here.

    6. Standalone (Algorithm Only) Performance:

    The document states "Additional bench software testing was performed to confirm the device meets the special controls in 21 CFR 876.1520 for true and false positives, pixel degradation and video delays." This indicates that the algorithm's performance in detecting polyps and managing system lags was tested independently. However, the specific metrics (e.g., standalone sensitivity, specificity, or FPs/min rate) from this testing are not numerically reported in this document.

    7. Type of Ground Truth Used:

    For the clinical performance (referred to as K213686), the type of ground truth (e.g., expert consensus, pathology, follow-up outcomes) is not specified in this document. For the non-clinical performance testing, the ground truth is defined by the design requirements and regulatory standards for "true and false positives, pixel degradation and video delays."

    8. Sample Size for the Training Set:

    Not provided in this document.

    9. How the Ground Truth for the Training Set Was Established:

    Not provided in this document.


    Why did this record match?
    Product Code :

    QNP

    Intended Use

    The GI Genius™ system is a computer-assisted reading tool designed to aid endoscopists in detecting colonic mucosal lesions (such as polyps and adenomas) in real time during standard white-light endoscopy examinations of patients undergoing screening and surveillance endoscopic mucosal evaluations. The GI Genius™ computer-assisted detection device is limited for use with standard white-light endoscopy imaging only. This device is not intended to replace clinical decision making.

    Device Description

    GI Genius is an artificial intelligence-based device that has been trained to process colonoscopy images containing regions consistent with colorectal lesions like polyps, including those with flat (non-polypoid) morphology.

    GI Genius is composed of software (namely, ColonPRO™ 4.0) and hardware (namely, GI Genius™ Module 100 and 200).

    GI Genius™ Module 100 and 200 are compatible with Video Processors featuring SDI (SMPTE 259M) or HD-SDI (SMPTE 292M) output ports and endoscopic display monitors featuring SDI (SMPTE 259M) or HD-SDI (SMPTE 292M) input ports. GI Genius™ Module 200 is also compatible with Video Processors featuring the 4K UHD standard.

    The GI Genius system is connected between the video processor and the endoscopic display monitor. When first switched on, the endoscopic field of view is clearly identified by four corner markers, and a blinking green square indicator appears on the connected endoscopic display monitor to state that the system is ready to function.

    During live video streaming of the endoscopic video image, GI Genius generates a video output on the endoscopic display monitor that contains the original live video together with superimposed green square markers that will appear when a polyp or other lesion of interest is detected, accompanied by a short sound. These markers will not be visible when no lesion detection occurs.

    The operating principle of the subject device is identical to that of the predicate device, this being a computerassisted detection device used in conjunction with endoscopy for the detection of abnormal lesions in the gastrointestinal tract. This device with advanced software algorithms brings attention to images to aid in the detection of lesions. The device includes hardware to support interfacing with video endoscopy systems and the accessories given by the footswitch and the USB K-switch.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study information for the GI Genius™ Module 100, GI Genius™ Module 200, and ColonPRO™ 4.0, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document doesn't explicitly state "acceptance criteria" for each performance metric, but it does present a comparison table that shows the performance of the Subject Device (ColonPRO™ 4.0) against its Predicate Device (GI Genius™ System 100 and 200). The implication is that the subject device's performance, being "improved" or "same" compared to the already cleared predicate, meets the necessary equivalence for clearance.

    | Characteristic | Acceptance Criteria (implied: at least as good as predicate) | Reported Device Performance (Subject Device, ColonPRO™ 4.0) | Comparison to Predicate (predicate performance) |
    | --- | --- | --- | --- |
    | Lesion-based sensitivity | ≥ 86.5% | 88.07% | Improved (86.5%) |
    | Frame-level true positives | ≥ 269,223 | 277,738 | Improved (269,223) |
    | Frame-level true negatives (150 videos / 338 polyps) | ≥ 5,239,128 | 5,248,406 | Improved (5,239,128) |
    | Frame-level false positives (150 videos / 338 polyps) | ≤ 104,669 | 95,391 | Improved (104,669) |
    | Frame-level false negatives (150 videos / 338 polyps) | ≤ 192,567 | 184,052 | Improved (192,567) |
    | True positive rate per frame | Mean ≥ 58.30%; 100% of polyps | Mean 60.14%; 100% of polyps | Improved (mean 58.30%; 100% of polyps) |
    | False positive rate per frame | Mean ≤ 1.96% | Mean 1.79% | Improved (mean 1.96%) |
    | Frame-based TPr/FPr ROC curve, AUC | ≥ 0.796 | 0.826 | Improved (0.796) |
    | False positive clusters per patient (500 ms) | ≤ 11 | 10 | Improved (11) |
    | Video delay, signal in to signal out | As per predicate (1.52 µs Module 100; 0.74 µs Module 200) | 1.52 µs (Module 100); 0.74 µs (Module 200) | Same |
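    The per-frame rates follow directly from the frame-level confusion-matrix counts. As a quick sanity check (a sketch using only the subject-device counts reported above), the stated 60.14 % true-positive rate and 1.79 % false-positive rate can be reproduced:

    ```python
    # Frame-level confusion-matrix counts for the subject device (ColonPRO 4.0),
    # as reported in the comparison above (150 videos / 338 polyps).
    tp = 277_738    # frame-level true positives
    tn = 5_248_406  # frame-level true negatives
    fp = 95_391     # frame-level false positives
    fn = 184_052    # frame-level false negatives

    tpr = tp / (tp + fn)  # per-frame sensitivity (true-positive rate)
    fpr = fp / (fp + tn)  # per-frame false-positive rate

    print(f"TPr = {tpr:.2%}")  # TPr = 60.14%, matching the reported mean
    print(f"FPr = {fpr:.2%}")  # FPr = 1.79%, matching the reported mean
    ```

    Both values clear the predicate-derived figures (TPr ≥ 58.30 %, FPr ≤ 1.96 %), consistent with the "Improved" comparisons above.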

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: The document explicitly states that frame-level performance was assessed using 150 videos containing 338 polyps. It does not specify whether each video corresponds to a unique patient.
    • Data Provenance: The document does not provide information on the country of origin of the data or whether it was retrospective or prospective. It only mentions that "the baseline clinical validation for the subject device was conducted and reviewed in DEN200055 and is still applicable." Since this is a Special 510(k) for a software update (version 4.0.0 replacing 3.0.2), the primary performance data seems to derive from the re-training of the neural network rather than a new clinical study. The "Non-clinical testing" section mentions that "Tests according to the Standalone Performance Testing Protocol v2.0, submitted as part of the K231143 predicate device submission, have been repeated for the applicable parts of the subject device." This suggests the test set for this submission is the same as, or comparable to, that used for the predicate.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not specify the number of experts used or their qualifications for establishing the ground truth on the test set.

    4. Adjudication Method for the Test Set

    The document does not specify the adjudication method used for the test set.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The document does not mention or present an MRMC comparative effectiveness study where human readers improve with AI vs. without AI assistance. The device is described as a "computer-assisted reading tool," suggesting it's intended to work alongside an endoscopist, but no study on human performance improvement with the device is provided in this submission or summary. It refers to the "baseline clinical validation" for the predicate device, but the details of that validation are not present here.

    6. Standalone Performance Study (Algorithm Only)

    Yes, a standalone performance study was done. The performance metrics listed in the table (Lesion-based sensitivity, Frame-level True Positive/Negative/False Positive/Negative, True positive rate per frame, False positive rate per frame, Frame-Based TPr/FPr ROC curve, AUC, False positive clusters per patient) all refer to the algorithm's performance without direct human-in-the-loop interaction for the purpose of these specific measurements. The "Non-clinical testing" section explicitly states: "Tests according to the Standalone Performance Testing Protocol v2.0, submitted as part of the K231143 predicate device submission, have been repeated for the applicable parts of the subject device."

    7. Type of Ground Truth Used

    The document implies the ground truth for polyps and lesions was used to evaluate detection performance. However, it does not explicitly state the method for establishing this ground truth (e.g., expert consensus, pathology, outcome data). Likely, for lesion detection in endoscopic videos, ground truth would typically be established by expert endoscopist review, potentially confirmed by pathology for detected lesions.

    8. Sample Size for the Training Set

    The document does not specify the sample size for the training set. It only mentions "retraining of the neural network" as the source of improved detection performance for ColonPRO™ 4.0.

    9. How the Ground Truth for the Training Set Was Established

    The document does not specify how the ground truth for the training set was established.


    K Number
    K230751
    Date Cleared
    2023-12-15

    (273 days)

    Product Code
    Regulation Number
    876.1520
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    QNP

    Intended Use

    This software is a computer-assisted reading tool designed to aid endoscopists in detecting colonic mucosal lesions (such as polyps and adenomas) in real time during standard endoscopy examinations of patients undergoing screening and surveillance endoscopic mucosal evaluations. This software is used with standard White Light Imaging (WLI) and Linked Color Imaging (LCI) endoscopy imaging. This software is not intended to replace clinical decision making.

    Device Description

    The subject device represents application of AI technology to endoscopic images to assist in detecting the presence of potential lesions. This development greatly contributes to improving the quality of colonoscopy. In recent years, computer-aided diagnosis (CAD) systems employing AI technologies have been approved and marketed as radiological medical devices for use with computed tomography (CT), X-ray, magnetic resonance imaging (MRI), and mammogram diagnostic images. In endoscopy as well, many images for diagnosis are taken. Since increasing the polyp detection rate is also in demand, CAD systems for endoscopy are being actively developed. Against this background, the company has developed this software (EW10-EC02), a new AI-based CAD system, to support Health Care Provider (HCP) detection of large intestine polyps in colonoscopic images. EW10-EC02 detects suspected large intestine polyps in the endoscope video image in real-time.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study findings for the EW10-EC02 Endoscopy Support Program, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document describes two main types of studies: standalone performance testing (evaluating the algorithm only) and clinical testing (evaluating human-in-the-loop performance). The acceptance criteria for the standalone performance are explicitly stated and met, while the clinical study endpoints serve as the criteria for evaluating the device's clinical benefit when assisting human readers.

    Standalone Performance Acceptance Criteria & Results:

    | Item | Acceptance Criteria (implicit; stated as "achieved all criteria") | Reported Performance, WLI Mode | Reported Performance, LCI Mode |
    | --- | --- | --- | --- |
    | Sensitivity per lesion (lesion-based sensitivity) | Exceeds a defined lower limit of the 95% CI (specific value not provided but stated as met) | 95.1% (91.1–98.3% CI) | 95.5% (91.5–98.7% CI) |
    | FP objects/patient (number of FPc per case) | Not numerically stated | 1.42 (1.09–1.81 CI) | 0.76 (0.42–1.21 CI) |
    | Detection persistence (Figure 1) | Implicit: robust correlation of detection persistence with sensitivity and FP objects/patient | Demonstrated strong correlation | Demonstrated strong correlation |
    | Frame-level performance | Implicit: acceptable values for TP, TN, FP, FN, sensitivity/frame, FPR/frame | See Table 7 | See Table 7 |
    | ROC AUC | Implicit: high accuracy | 0.79 (0.77–0.80 CI) | 0.87 (0.86–0.88 CI) |
    | FROC analysis | Implicit: supports performance | See Figure 4 | See Figure 4 |
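    The lesion-based sensitivities above are reported with 95 % confidence intervals. The document does not give the underlying lesion counts or the interval method, so the sketch below is purely illustrative: it computes a Wilson score interval for a hypothetical detected/total split chosen to land near the reported WLI figure.

    ```python
    import math

    def wilson_ci(k, n, z=1.96):
        """Wilson score 95% confidence interval for a binomial proportion k/n."""
        p = k / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    # Hypothetical counts: the summary reports 95.1% WLI lesion-based
    # sensitivity (91.1-98.3% CI) but not the denominator.
    detected, total = 135, 142
    sens = detected / total
    lo, hi = wilson_ci(detected, total)
    print(f"sensitivity = {sens:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
    ```

    The interval narrows as the (unreported) lesion count grows, which is why the denominator matters when interpreting such CIs.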

    Clinical Study Endpoints & Results (serving as criteria for human-in-the-loop):

    | Endpoint | Acceptance Criteria (implicit: superiority for APC or meeting margin for PPV; non-inferiority for FPR) | Reported Performance (CAC group vs. CC group) | P-Value / CI |
    | --- | --- | --- | --- |
    | Primary endpoints | | | |
    | Adenomas per colonoscopy (APC) | Superiority (CAC vs. CC) | CAC: 0.990 ± 1.610; CC: 0.849 ± 1.484 | 0.018 (superiority met) |
    | Positive predictive value (PPV) | Meeting margin of -9.56% | CAC: 48.6%; CC: 54.0% | (-9.56%, -1.48%) (margin met) |
    | Positive percent agreement (PPA) | (Implicit: acceptable performance) | CAC: 60.7%; CC: 66.2% | (-10.50%, -2.30%) |
    | Secondary endpoints of note | | | |
    | Polyps per colonoscopy (PPC) | (Implicit: acceptable performance, P-value | | |
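    The PPV endpoint illustrates a margin-based decision: the criterion is met when the lower bound of the two-sided 95 % CI for the difference does not fall below the pre-specified margin. The document does not describe the exact statistical procedure, so the sketch below only encodes that comparison rule, using the reported numbers:

    ```python
    def meets_margin(ci_lower, margin):
        """Margin check: the lower CI bound must not fall below the margin."""
        return ci_lower >= margin

    # PPV difference (CAC vs. CC): reported 95% CI (-9.56%, -1.48%),
    # pre-specified margin -9.56%. The summary reports this margin as met.
    print(meets_margin(-0.0956, -0.0956))  # True
    # A hypothetical bound below the margin would fail the check:
    print(meets_margin(-0.1200, -0.0956))  # False
    ```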

    K Number
    K223473
    Manufacturer
    Date Cleared
    2023-07-25

    (250 days)

    Product Code
    Regulation Number
    876.1520
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    QNP

    Intended Use

    The ME-APDS (Magentig Eye's Automatic Polyp Detection System) is intended to be used by endoscopists as an adjunct to the common video colonoscopy procedure (screening and surveillance), aiming to assist in identifying lesions during colonoscopy procedure by highlighting regions with visual characteristics consistent with different types of mucosal abnormalities that appear in the colonoscopy video during the procedure. Highlighted regions can be independently assessed by the endoscopist and appropriate action taken according to standard clinical practice.

    The ME-APDS is trained to process video images which may contain regions consistent with polyps.

    The ME-APDS is limited for use with standard white-light endoscopy imaging only.

    The ME-APDS is intended to be used as an adjunct to endoscopy procedures and is not intended to replace histopathological sampling as means of diagnosis.

    Device Description

    The ME-APDS (Magentig Eye's Automatic Polyp Detection System) is intended to be used as an adjunct to the common video colonoscopy procedure. The system application aims to assist the endoscopist in identifying lesions, such as polyps, during colonoscopy procedures in real time. The device is not intended to be used for diagnosis or characterization of lesions, and does not replace clinical decision making.

    The system acquires the digital video output signal from the local endoscopy camera and processes the video frames. It runs deep machine learning and additional supporting algorithms in real time on the video frames in order to detect and identify regions having characteristics consistent with different types of mucosal abnormalities such as polyps. The output video with the detected lesions is presented on a separate touchscreen, supplied as part of the ME-APDS, highlighting the suspicious areas on the original video. The output of the system can also be presented on additional monitors in the procedure room using the 1x4 HDMI Splitter supplied with the system. The user can also take snapshots of the videos, with and without the highlighting of the suspicious areas, record videos and view in full screen mode.

    AI/ML Overview

    Acceptance Criteria and Study Details for ME-APDS™

    1. Table of Acceptance Criteria and Reported Device Performance:

    | Metric | Acceptance Criteria (stated or implied) | Reported Device Performance (ME-APDS™) |
    | --- | --- | --- |
    | Standalone performance | | |
    | Polyp-wise recall (polyps with histology) | Not explicitly stated; high recall across consecutive frames implied for adequate aid | PRecall1: 100.0%; PRecall3: 99.6%; PRecall5: 99.6%; PRecall7: 99.6% |
    | Polyp-wise recall (entire testing dataset) | Not explicitly stated; high recall across consecutive frames implied for adequate aid | PRecall1: 98.2%; PRecall3: 94.2%; PRecall5: 91.5%; PRecall7: 90.0% |
    | False positives per full video (FPPF) | FPPF threshold of 0.0328 (normalized to 15 minutes) | Met the 0.0328 FPPF threshold |
    | Marker annotation latency | Not explicitly stated; real-time performance is a key feature | Median: 0.166 sec (5 frames); average: 0.85 sec |
    | Robustness (IoU threshold variation) | Robust performance with IoU thresholds varied up to 0.2 | Changing IoU from 0.01 to 0.1 and 0.2 "slightly influenced only the framewise recall, and did not influence the other results supporting the robustness of the testing." |
    | Clinical performance (comparative effectiveness) | | |
    | Adenomas per colonoscopy (APC) | Lower limit of 95% CI of MEAC/CC APC ratio expected to be > 1.05 | MEAC APC: 0.70; CC APC: 0.51 (implied ratio 0.70/0.51 ≈ 1.37, which is > 1.05) |
    | Adenomas per extraction (APE) | Non-inferiority: lower limit of 95% CI of MEAC − CC APE difference expected to be above −0.20 | MEAC APE: 0.31; CC APE: 0.27 (difference 0.04; stated non-inferior; the overall CI is not explicitly given, only subgroup CIs) |
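    The robustness analysis varies an IoU (intersection-over-union) threshold used to decide whether a detection overlaps an annotated polyp region enough to count as a hit. The standard IoU computation for axis-aligned boxes is sketched below; the boxes are hypothetical, and the document does not describe ME-APDS's internal matching logic.

    ```python
    def iou(box_a, box_b):
        """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b
        # Intersection rectangle (empty if the boxes do not overlap).
        iw = max(0, min(ax2, bx2) - max(ax1, bx1))
        ih = max(0, min(ay2, by2) - max(ay1, by1))
        inter = iw * ih
        union = ((ax2 - ax1) * (ay2 - ay1)
                 + (bx2 - bx1) * (by2 - by1)
                 - inter)
        return inter / union if union else 0.0

    # A marginal overlap: counts as a hit at IoU >= 0.01 but not at 0.1 or 0.2,
    # the thresholds exercised in the robustness testing.
    detection = (0, 0, 100, 100)
    ground_truth = (80, 80, 180, 180)
    print(round(iou(detection, ground_truth), 4))  # 0.0204
    ```

    Raising the threshold from 0.01 to 0.2 only reclassifies such marginal overlaps, which is consistent with the report that mainly frame-wise recall was affected.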

    2. Sample Size Used for the Test Set and Data Provenance:

    • Standalone Performance Testing:

      • Sample Size: 172 unique full colonoscopy videos, containing 449 polyps (16 videos contained no polyps).
      • Data Provenance: Not explicitly stated for each video, but the videos covered various demographic factors (subject sex, age, race). Given that the clinical study data was collected from "10 medical centers in Europe, the United States and Israel," it is highly probable that the standalone testing data also originates from a similar diverse geographical pool. The context suggests it is likely retrospective video data collected from past procedures.
    • Clinical Testing (Comparative Effectiveness Study):

      • Sample Size: 950 patients enrolled (916 patients for baseline demographics). The treatment arms were:
        • CC (Conventional Colonoscopy): 398 patients
        • CC-MEAC (CC followed by MEAC): 69 patients
        • MEAC (ME-APDS-assisted Colonoscopy): 385 patients
        • MEAC-CC (MEAC followed by CC): 64 patients
      • Data Provenance: A randomized, two-arm, multi-center, controlled study conducted at 10 medical centers in Europe, the United States, and Israel. This is a prospective study.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    • Standalone Performance Testing: The document does not explicitly state the number or qualifications of the experts who established ground truth for the standalone test set videos. It mentions that "263 polyps had histology findings," which implies histology served as ground truth for those polyps, and the fact that polyps were "reported in the procedure report" suggests the involvement of clinical experts (endoscopists) at the time of the procedure.

    • Clinical Testing (Comparative Effectiveness Study): The ground truth for polyps (number of adenomas, extractions) was established by the endoscopists performing the colonoscopies within each arm of the study. These would be qualified medical professionals (endoscopists) at the participating clinical centers. The document does not specify their exact years of experience or the number of individual experts beyond the "10 medical centers." The use of "histology" for confirmation of adenomas is also mentioned implicitly in the APE definition, meaning pathology experts contribute to the final ground truth.

    4. Adjudication Method for the Test Set:

    • Standalone Performance Testing: Not explicitly stated. The mention of "polyps verified by histology" implies that a pathologist's report served as the ultimate ground truth. It does not describe an adjudication process between multiple readers of the videos for annotation or ground truth establishment.

    • Clinical Testing (Comparative Effectiveness Study): No specific adjudication method across multiple independent experts is described for determining the ultimate ground truth in the clinical study. The number of adenomas and extractions were recorded during the colonoscopy procedures, with histology confirming the nature of extracted polyps. The "tandem" design (CC followed by MEAC or MEAC followed by CC) in a subset of patients implicitly allows for a comparison of findings within the same patient, acting as an internal check.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and effect size of human readers improvement with AI vs without AI assistance:

    • Yes, a comparative effectiveness study was done.
    • Study Design: A randomized, two-arm, multi-center, controlled study comparing conventional colonoscopy (CC) with ME-APDS-assisted colonoscopy (MEAC). This is a clinical trial, not a typical MRMC study where multiple readers interpret cases for diagnostic accuracy. However, it still assesses the effectiveness of human readers with AI assistance versus without.
    • Effect Size:
      • Adenomas Per Colonoscopy (APC): MEAC APC was 37% higher (relative increase) than CC APC.
        • MEAC APC: 0.70
        • CC APC: 0.51
        • Absolute difference: 0.70 - 0.51 = 0.19 adenomas per colonoscopy.
      • The study found "a mean 0.20 increment between arms for each analyzed subgroup" for APC.
      • MEAC was "more effective than CC in detecting ≤5mm polyps and in detecting >6-9 mm polyps, sessile and flat polyps and adenomas in the proximal colon."
      • The summary also notes that "more sessile serrated adenomas (SSAs) were identified in MEACs as compared to CCs, which resulted in also a higher sessile serrated detection rate (SDR)."
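    The effect-size figures above are mutually consistent, as a quick check of the reported numbers shows:

    ```python
    # Consistency check of the reported APC effect size (point estimates from
    # the summary; no CI data are available here).
    meac_apc, cc_apc = 0.70, 0.51

    absolute = meac_apc - cc_apc             # ≈ 0.19 adenomas per colonoscopy
    relative = (meac_apc - cc_apc) / cc_apc  # ≈ 0.37, i.e. the reported 37%

    print(round(absolute, 2), round(relative * 100))
    ```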

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • Yes, a standalone performance testing was done. This is detailed under "Standalone Performance Testing" on page 5.
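    The IoU matching behind the standalone robustness analysis (varying the threshold from 0.01 to 0.2) can be sketched as a standard bounding-box overlap computation. This is a generic sketch; the actual matching logic of the device's evaluation is not described in the document:

    ```python
    # Generic intersection-over-union (IoU) for axis-aligned boxes (x1, y1, x2, y2).
    # A detection marker counts as a "hit" on a ground-truth polyp box when IoU
    # exceeds a threshold; the summary reports that results were stable across
    # thresholds of 0.01, 0.1, and 0.2.

    def iou(a, b):
        ax1, ay1, ax2, ay2 = a
        bx1, by1, bx2, by2 = b
        ix1, iy1 = max(ax1, bx1), max(ay1, by1)
        ix2, iy2 = min(ax2, bx2), min(ay2, by2)
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
        return inter / union if union else 0.0

    def is_hit(pred, gt, threshold=0.2):
        """True when a predicted box overlaps a ground-truth box enough to count."""
        return iou(pred, gt) >= threshold

    print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 1/7 ≈ 0.143
    ```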

    7. The type of ground truth used:

    • Standalone Performance Testing: The ground truth was primarily based on histology findings for 263 polyps and implicitly based on the endoscopist's procedure report for the remaining polyps identified in the videos.
    • Clinical Testing (Comparative Effectiveness Study): The ground truth was established by recorded adenoma detections and extractions by endoscopists during the procedures, with the definitive diagnosis of adenomas confirmed by histopathology results.

    8. The sample size for the training set:

    The document does not explicitly state the sample size (number of videos or polyps) used for the ME-APDS training set. It only mentions that the system "runs deep machine learning and additional supporting algorithms" and "is trained to process video images."

    9. How the ground truth for the training set was established:

    The document does not explicitly describe how the ground truth for the training set was established. However, given the nature of the device and the testing methodologies, it is highly probable that the training data would have been meticulously annotated by clinical experts (e.g., experienced gastroenterologists or endoscopists) to label polyps, potentially with subsequent pathological confirmation for cases where tissue was removed. This common practice ensures high-quality ground truth for training medical AI models.
