The SKOUT® system is a software device designed to detect potential colorectal polyps in real time during colonoscopy examinations. It is indicated as a computer-aided detection tool providing colorectal polyp location information to assist qualified and trained gastroenterologists in identifying potential colorectal polyps during colonoscopy examinations in adult patients undergoing colorectal cancer screening or surveillance.
The SKOUT® system is only intended to assist the gastroenterologist in identifying suspected colorectal polyps, and the gastroenterologist is responsible for reviewing suspected polyp areas identified by SKOUT® and confirming the presence or absence of a polyp based on their own medical judgment. SKOUT® is not intended to replace a full patient evaluation, nor is it intended to be relied upon to make a primary interpretation of endoscopic procedures, medical diagnosis, or recommendations of treatment/course of action for patients. SKOUT® is indicated for white light colonoscopy only.
The SKOUT® system is a software-based computer-aided detection (CADe) system for the analysis of high-definition endoscopic video during colonoscopy procedures. The SKOUT® system is intended to aid gastroenterologists with the detection of potential colorectal polyps during colonoscopy by providing an informational visual aid on the endoscopic monitor, using trained software that processes the endoscopic video in real time.
Users will primarily interact with the SKOUT system by observing the software display, including the polyp detection box and device status indicator signal.
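For readers unfamiliar with how a real-time CADe overlay of this kind is typically structured, the following is a minimal sketch in Python, assuming a hypothetical detect_polyps model stub and a generic OpenCV capture source. It illustrates the per-frame loop, detection-box drawing, and status indicator described above; it is not a description of SKOUT's actual implementation.

```python
import cv2  # OpenCV, for video capture and on-screen drawing


def detect_polyps(frame):
    """Hypothetical stand-in for a trained polyp-detection model.

    A real CADe system would run a neural network here; this stub
    returns a list of (x, y, width, height, confidence) tuples.
    """
    return []  # no detections in this illustrative stub


def run_overlay(source=0, threshold=0.5):
    """Per-frame loop: read video, run detection, draw the overlay."""
    cap = cv2.VideoCapture(source)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Keep only detections above the (assumed) confidence threshold.
        for x, y, w, h, conf in (b for b in detect_polyps(frame) if b[4] >= threshold):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # Device-status indicator: a solid dot shows the detector is running.
        cv2.circle(frame, (20, 20), 8, (0, 255, 0), -1)
        cv2.imshow("CADe overlay (illustrative)", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop
            break
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    run_overlay()
```

In a cleared device, the detector, thresholds, and rendering path would be the validated components; only the loop structure shown here is generic.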
The provided text describes an FDA 510(k) clearance for the SKOUT® system, a software device designed to detect potential colorectal polyps during colonoscopy. However, it focuses on demonstrating substantial equivalence to a predicate device (K240781), which in turn relied on an earlier device (K213686) as its predicate. The current submission (K241508) mainly highlights minor software refinements and states that the "clinical performance remains unchanged from the clinical performance submitted in K213686." The details requested about acceptance criteria and the study proving the device meets them therefore refer primarily to the data supporting K213686, which is not fully detailed in this document.
Based on the provided K241508 document, here's the information that can be extracted, and where the information is missing:
1. A table of acceptance criteria and the reported device performance
The document states, "The inference algorithms [use] the same architecture and meet the same performance requirements as the predicate device, therefore clinical performance remains unchanged from the clinical performance submitted in K213686." This implies that the acceptance criteria and reported performance for K241508 are identical to those established for K213686. However, the specific acceptance criteria and numerical performance metrics are not provided in this document.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Test Set Sample Size: Not explicitly stated for K241508. The document mentions "new data representing 61% of the cumulative data" from 27 new clinical sites compared to the predicate, used for retraining and refinement. However, the size of the test set used to demonstrate performance against the acceptance criteria for this specific submission is not detailed. The phrase "clinical performance remains unchanged from the clinical performance submitted in K213686" indicates that the original clinical performance evaluation from K213686 is being referenced, but its test-set details are not given here.
- Data Provenance: The document states "Utilization of data from 30+ unique clinical sites, of which 27 were new compared to the predicate device." It does not specify the countries of origin or whether the data was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not explicitly stated in this document. This information would likely be found in the original K213686 submission.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not explicitly stated in this document. This information would likely be found in the original K213686 submission.
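For context, "2+1" is shorthand for a common adjudication convention: two primary readers label each case independently, and a third reader resolves disagreements. The sketch below is a generic illustration of that rule, not a description of the K213686 protocol.

```python
def adjudicate_2plus1(reader1, reader2, adjudicator):
    """Generic 2+1 adjudication: derive a per-case truth label.

    reader1/reader2: lists of booleans (e.g., "polyp present") from the
    two primary readers; adjudicator: labels from a third reader, used
    only where the primary readers disagree.
    """
    truth = []
    for r1, r2, adj in zip(reader1, reader2, adjudicator):
        truth.append(r1 if r1 == r2 else adj)  # agreement stands; else adjudicate
    return truth


# Example: the readers disagree on the second case, so the adjudicator decides.
print(adjudicate_2plus1([True, True, False], [True, False, False], [True, True, False]))
# -> [True, True, False]
```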
5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI versus without AI assistance
The document mentions "The inference algorithms [use] the same architecture and meet the same performance requirements as the predicate device, therefore clinical performance remains unchanged from the clinical performance submitted in K213686." This suggests that if such a study was performed, it was for K213686. However, whether an MRMC study was done, its effect size, and any human reader improvement are not detailed in this document.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The document states that the system "is only intended to assist the gastroenterologist" and "is not intended to replace a full patient evaluation." This indicates its role as a human-in-the-loop tool. While standalone performance data might have been collected as part of the technical evaluation, the document does not explicitly describe a standalone performance study as the primary means of demonstrating effectiveness. It alludes to "algorithm performance" being assessed as part of "additional bench software testing" to meet special controls.
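To make concrete what standalone (algorithm-only) performance testing typically measures for a detection device, here is an illustrative sketch of per-frame true-positive and false-positive counting via intersection-over-union (IoU) matching. The metric definitions and the 0.5 threshold are assumptions for illustration, not the SKOUT bench-test methodology.

```python
def iou(a, b):
    """Intersection over union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0


def frame_metrics(predictions, ground_truth, iou_threshold=0.5):
    """Greedy one-to-one matching of predicted boxes to ground-truth boxes."""
    used = set()
    tp = 0
    for gt in ground_truth:
        for i, pred in enumerate(predictions):
            if i not in used and iou(pred, gt) >= iou_threshold:
                used.add(i)
                tp += 1
                break
    fn = len(ground_truth) - tp        # annotated polyps the algorithm missed
    fp = len(predictions) - len(used)  # detection boxes with no matching polyp
    return tp, fp, fn


# Example: one prediction closely overlapping one annotated polyp.
tp, fp, fn = frame_metrics([(10, 10, 50, 50)], [(12, 12, 50, 50)])
sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
print(tp, fp, fn, sensitivity)  # 1 0 0 1.0
```

Aggregated per lesion or per frame, counts like these yield the sensitivity and false-positive figures a standalone performance summary would report.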
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
Not explicitly stated in this document. This information would likely be found in the original K213686 submission. For polyp detection, pathology is a common ground truth, but expert consensus is also frequently used for video-based analysis when immediate pathology is not available.
8. The sample size for the training set
The document mentions "Utilization of data from 30+ unique clinical sites, of which 27 were new compared to the predicate device, with new data representing 61% of the cumulative data." This composite data was used for "Refinement/retraining of polyp detection algorithm." However, the total numerical sample size (e.g., number of colonoscopies, video frames, or polyps) for the training set is not explicitly stated.
9. How the ground truth for the training set was established
Not explicitly stated in this document. This information would likely be found in the original K213686 submission.
Summary of Missing Information and Recommendation:
The provided document (K241508) is a 510(k) summary for a modified device. It heavily relies on the performance demonstrated by an earlier predicate device (K213686) by asserting "clinical performance remains unchanged from the clinical performance submitted in K213686." To answer most of your detailed questions regarding acceptance criteria, study design, ground truth establishment, expert qualifications, and specific performance metrics, you would need to access the information contained in the K213686 FDA submission. The current document primarily confirms the substantial equivalence of the modified SKOUT® system (K241508) to its immediate predicate (K240781), which itself points back to K213686 for clinical performance.
§ 876.1520 Gastrointestinal lesion software detection system.
(a) Identification. A gastrointestinal lesion software detection system is a computer-assisted detection device used in conjunction with endoscopy for the detection of abnormal lesions in the gastrointestinal tract. This device with advanced software algorithms brings attention to images to aid in the detection of lesions. The device may contain hardware to support interfacing with an endoscope.
(b) Classification. Class II (special controls). The special controls for this device are:
(1) Clinical performance testing must demonstrate that the device performs as intended under anticipated conditions of use, including detection of gastrointestinal lesions and evaluation of all adverse events.
(2) Non-clinical performance testing must demonstrate that the device performs as intended under anticipated conditions of use. Testing must include:
(i) Standalone algorithm performance testing;
(ii) Pixel-level comparison of degradation of image quality due to the device;
(iii) Assessment of video delay due to marker annotation; and
(iv) Assessment of real-time endoscopic video delay due to the device.
(3) Usability assessment must demonstrate that the intended user(s) can safely and correctly use the device.
(4) Performance data must demonstrate electromagnetic compatibility and electrical safety, mechanical safety, and thermal safety testing for any hardware components of the device.
(5) Software verification, validation, and hazard analysis must be provided. Software description must include a detailed, technical description including the impact of any software and hardware on the device's functions, the associated capabilities and limitations of each part, the associated inputs and outputs, mapping of the software architecture, and a description of the video signal pipeline.
(6) Labeling must include:
(i) Instructions for use, including a detailed description of the device and compatibility information;
(ii) Warnings to avoid overreliance on the device, that the device is not intended to be used for diagnosis or characterization of lesions, and that the device does not replace clinical decision making;
(iii) A summary of the clinical performance testing conducted with the device, including detailed definitions of the study endpoints and statistical confidence intervals; and
(iv) A summary of the standalone performance testing and associated statistical analysis.
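As an informal illustration of the kinds of measurements special controls (2)(ii) through (2)(iv) call for, the sketch below compares an input frame with the device's output frame pixel by pixel (maximum absolute difference and PSNR) and times a processing stage to estimate added video delay. The pass-through identity pipeline and the function names are hypothetical stand-ins for an actual video path.

```python
import time

import numpy as np


def pixel_degradation(frame_in, frame_out):
    """Pixel-level comparison: max absolute difference and PSNR in dB."""
    diff = frame_in.astype(np.float64) - frame_out.astype(np.float64)
    max_abs = float(np.max(np.abs(diff)))
    mse = float(np.mean(diff ** 2))
    psnr = float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
    return max_abs, psnr


def measure_delay(process, frame, runs=100):
    """Estimate per-frame processing delay (in ms) of a pipeline stage."""
    start = time.perf_counter()
    for _ in range(runs):
        process(frame)
    return (time.perf_counter() - start) / runs * 1000.0


# Hypothetical pass-through pipeline: output equals input, so PSNR is infinite.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # one HD video frame
identity = lambda f: f
print(pixel_degradation(frame, identity(frame)))    # (0.0, inf)
print(f"{measure_delay(identity, frame):.3f} ms per frame")
```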