The INTIO ClearStart.SVM™ system is a self-contained image analysis desktop workstation addressing the needs of physicians performing diagnostic oncologic imaging, treatment planning, and post-procedure or systemic therapy follow-up assessment. ClearStart.SVM™ provides semi-automated tools for segmentation of suspicious lesions, including primary and metastatic lung and liver tumors, and lymph node assessment using non-contrast and contrast CT images. Following lesion segmentation, ClearStart.SVM™'s automated volumetric, RECIST, and WHO lesion measurements provide the user with data on the time course of the patient's response to therapy. A large on-board disk storage capacity allows the user to easily track each patient's CT data from initial diagnosis through therapeutic interventions and follow-up exams, and a reporting package aids in the assessment of response to therapy.
The ClearStart.SVM™ product consists of a custom-configured desktop computer built from high-performance commercial-off-the-shelf components and INTIO proprietary software running on a Linux operating system. CT data are read into the system from a CD or DVD and stored uncompressed in a database, which allows easy retrieval for follow-up assessments.
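As a rough illustration of that data path, the sketch below reads a DICOM series into an uncompressed volume. It assumes the pydicom and numpy libraries and a hypothetical media path; it is not based on INTIO's actual import or database code.

```python
# Minimal sketch: load a CT series from removable media into an
# uncompressed 3D volume. Directory path and sorting key are
# illustrative assumptions, not taken from the ClearStart.SVM summary.
from pathlib import Path

import numpy as np
import pydicom


def load_ct_series(series_dir: str) -> np.ndarray:
    """Read every DICOM slice in a directory and stack it into a 3D array."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    # Order slices along the scan axis using the z component of
    # ImagePositionPatient (falling back to InstanceNumber if absent).
    slices.sort(
        key=lambda ds: float(getattr(ds, "ImagePositionPatient", [0, 0, ds.InstanceNumber])[2])
    )
    volume = np.stack([ds.pixel_array.astype(np.int16) for ds in slices])
    # Convert stored values to Hounsfield units via rescale slope/intercept.
    slope = float(getattr(slices[0], "RescaleSlope", 1))
    intercept = float(getattr(slices[0], "RescaleIntercept", 0))
    return volume * slope + intercept


# Example (hypothetical path): volume = load_ct_series("/media/cdrom/ct_series")
```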
The ClearStart.SVM™ system uses data from contrast and non-contrast CT examinations of patients presenting with solid tumors. SVM segments the tumor within the organ and determines measurements such as the longest tumor length in axial views and tumor volume. Diagnostic CT scans are viewed in both multiplanar reformatted (MPR) views and 3D volume-rendered (VR) views, allowing the user to choose the best visualization of the selected tumors to be treated. Tumors are typically best visualized in the contrast CT exam, which is used for analysis. User-selected lesions are segmented with minimal user input: the user places a cursor over the tumor and, by clicking a mouse button, inserts one or more region-growing seed points in the tumor to be segmented. Feedback on the seed locations is shown as graphical overlays on the MPR views. After the seed or seeds are placed, the user activates a computer-determined bounding box surrounding the marked lesion, with the surrounding tissue demonstrating a contrast difference from the marked tumor. The lesion is then segmented using a 3D active contour methodology.
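To make the workflow concrete, the following is a minimal sketch of a seed-and-bounding-box segmentation. It uses a seeded flood fill as a simplified stand-in for INTIO's proprietary 3D active-contour algorithm, which is not described in the summary; the seed coordinates, box margin, and HU tolerance are hypothetical values chosen for illustration.

```python
# Simplified illustration of the seed-and-bounding-box workflow described
# above. A seeded flood fill stands in for the proprietary 3D active-contour
# segmentation; margin and tolerance values are assumptions for the example.
import numpy as np
from skimage.segmentation import flood


def segment_lesion(ct_volume: np.ndarray, seed: tuple,
                   margin: int = 20, tolerance: float = 60.0) -> np.ndarray:
    """Grow a region from a user-placed seed within a bounding box around it."""
    z, y, x = seed
    # Restrict processing to a box around the seed, mirroring the
    # computer-determined bounding box in the described workflow.
    zs = slice(max(z - margin, 0), z + margin)
    ys = slice(max(y - margin, 0), y + margin)
    xs = slice(max(x - margin, 0), x + margin)
    sub = ct_volume[zs, ys, xs]
    local_seed = (z - zs.start, y - ys.start, x - xs.start)
    # Region growing: include voxels within `tolerance` HU of the seed value.
    sub_mask = flood(sub, local_seed, tolerance=tolerance)
    mask = np.zeros(ct_volume.shape, dtype=bool)
    mask[zs, ys, xs] = sub_mask
    return mask
```

A real active-contour implementation would refine this initial region against image gradients inside the bounding box; the sketch only shows how the seed and box constrain the segmentation.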
As a follow-up to a specific therapy, the ClearStart.SVM™ system provides tools to help assess the effectiveness of image-guided loco-regional or systemic therapies for solid or semi-solid tumors throughout the body. Users are able to review changes in tumor dimensions on follow-up CT scans, with computer-assisted segmentation and automated measurement of lesions to determine tumor response to therapy over time. The ClearStart.SVM™ system is designed to semi-automatically segment and automatically measure tumors of the liver, lung, and lymph nodes.
INTIO engineers have developed ClearStart.SVM™'s image processing and display software, which provides both 2D (MPR) image display and 3D volume-rendered (VR) display capability. Assessment of tumor response to therapy, either loco-regional or systemic, is facilitated by lesion segmentation and automated measurement leveraging INTIO's proprietary image segmentation algorithm. The automated measurements include both conventional and specific quantitative measurements, such as RECIST, WHO, and volumetric CT measurements.
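As an illustration of what such measurements involve, the sketch below derives a volume, a RECIST-style longest axial diameter, and a WHO-style bidimensional product from a binary lesion mask. The voxel spacing, the brute-force diameter search, and the perpendicular-extent approximation are assumptions for the example, not the device's actual method.

```python
# Illustrative computation of volumetric, RECIST-style, and WHO-style
# measurements from a binary 3D lesion mask (z, y, x ordering).
# Brute-force geometry is fine for a sketch but not how a product would do it.
import numpy as np
from scipy.spatial.distance import pdist, squareform


def lesion_measurements(mask: np.ndarray, spacing: tuple) -> dict:
    """Volume (mm^3), longest axial diameter (mm), and bidimensional
    product (mm^2) for a binary 3D mask with (dz, dy, dx) spacing in mm."""
    dz, dy, dx = spacing
    volume = float(mask.sum()) * dz * dy * dx

    longest, perpendicular = 0.0, 0.0
    for axial in mask:  # RECIST-style: measure within each axial slice
        pts = np.argwhere(axial) * np.array([dy, dx])
        if len(pts) < 2:
            continue
        dists = squareform(pdist(pts))
        i, j = np.unravel_index(dists.argmax(), dists.shape)
        if dists[i, j] > longest:
            longest = float(dists[i, j])
            # WHO-style perpendicular diameter, approximated as the extent of
            # the lesion projected onto the direction orthogonal to the
            # longest diameter in the same slice.
            axis = pts[j] - pts[i]
            perp = np.array([-axis[1], axis[0]]) / np.linalg.norm(axis)
            proj = pts @ perp
            perpendicular = float(proj.max() - proj.min())

    return {"volume_mm3": volume,
            "recist_longest_diameter_mm": longest,
            "who_product_mm2": longest * perpendicular}
```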
The provided 510(k) summary for INTIO ClearStart.SVM™ does not contain acceptance criteria or detailed study results that prove the device meets specific performance metrics.
The document focuses on demonstrating substantial equivalence to a predicate device (Siemens syngo CT Oncology) by comparing features and outlining the general testing performed for safety and efficacy. It describes the design and intended use but lacks the quantitative data typically found in performance studies.
Here's a breakdown of the information that is available and the information that is missing based on your request:
1. Table of Acceptance Criteria and Reported Device Performance
This information is largely missing from the provided text. The document describes the device's functionality (semi-automated segmentation, automated volumetric measurements for liver, lung, and lymph nodes) but does not provide specific numerical acceptance criteria (e.g., accuracy, precision, Dice score thresholds) or quantitative performance results against those criteria.
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not provided. The document focuses on feature comparison and general safety/efficacy testing, not explicit performance metrics for segmentation or measurement accuracy. | Not provided. No specific numerical results (e.g., accuracy percentages, precision values) are given for segmentation or volumetric measurement. The document states that internal and third-party testing was performed to "assure that the system is safe and efficacious" and that "the system design specifications were realized," but these are general statements, not quantifiable performance metrics. |
2. Sample Size Used for the Test Set and Data Provenance
This information is missing. The summary states that "internal and third party testing was performed" and that "Medical professionals... validated the software workflow and usability," but it does not specify:
- The number of cases or images used in any test sets.
- The country of origin for any data used.
- Whether the data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
This information is missing. The document does not mention how ground truth was established for any performance testing, nor does it specify the number or qualifications of experts involved in such a process. It only states that "Medical professionals who are familiar with similar systems validated the software workflow and usability." This is a general statement about usability validation, not ground truth establishment for quantitative performance.
4. Adjudication Method for the Test Set
This information is missing. No details are provided on any adjudication methods (e.g., 2+1, 3+1) for establishing ground truth or resolving expert discrepancies in a test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study is not mentioned. The document focuses on comparing the device's features to a predicate device and general safety/efficacy testing. There is no mention of a study comparing human readers with and without AI assistance, or of any effect size for reader improvement.
6. If a Standalone (Algorithm Only) Performance Study Was Done
A standalone performance study, with specific quantitative results, is not explicitly detailed. While the device features "semi-automated tools for segmentation" and "automated volumetric, RECIST and WHO lesion measurements," the summary does not present a standalone performance study with metrics like sensitivity, specificity, Dice score, or measurement error against a defined ground truth. The testing described is more aligned with software verification and usability validation.
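For context, a standalone evaluation of a segmentation algorithm would typically report an overlap metric against a reference mask. The following is a minimal, purely illustrative sketch of how a Dice score is computed; neither the masks nor any thresholds come from the 510(k) summary.

```python
# Illustrative only: Dice score between an algorithm's segmentation mask
# and a reference (ground-truth) mask of the same shape.
import numpy as np


def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks; 1.0 means perfect overlap."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0
```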
7. The Type of Ground Truth Used
This information is missing. The document does not specify the type of ground truth used for any performance evaluation (e.g., expert consensus, pathology, outcomes data).
8. The Sample Size for the Training Set
This information is missing. The document does not mention any training data or its size, which is common for AI/ML-based devices. Given the 2011 submission date, this device might have used more traditional image processing techniques rather than deep learning, and thus a "training set" might not be applicable in the modern sense. However, the use of "3D active contour methodology" suggests some fitting or parameter optimization, which would imply a dataset for development.
9. How the Ground Truth for the Training Set Was Established
This information is missing. As with the training set size, details on how ground truth was established for any development or training data are not provided.
In summary, the provided 510(k) summary focuses on demonstrating substantial equivalence through feature comparison and general statements about safety and efficacy testing, rather than presenting detailed quantitative performance studies with specific acceptance criteria, sample sizes, ground truth methodologies, or expert involvement. This is common for predicate-based 510(k) submissions, especially those from an earlier era (2011), where extensive clinical performance data or AI-specific validation might not have been required to the same extent as for devices submitted today.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).