Cina is a radiological computer-aided triage and notification software device for use in the analysis of (1) non-enhanced head CT images and (2) CT angiography of the head.
The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and communicating suspected positive findings of (1) head CT images for Intracranial Hemorrhage (ICH) and (2) head CT angiography for large vessel occlusion (LVO) of the anterior circulation (distal ICA, MCA-M1 or proximal MCA-M2). Cina uses an artificial intelligence algorithm to analyze images and highlight cases with detected (1) ICH or (2) LVO on a standalone Web application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected ICH or LVO findings.
Notifications include compressed preview images that are meant for informational purposes only, and are not intended for diagnostic use beyond notification. The device does not alter the original medical image, and it is not intended to be used as a diagnostic device.
The results of Cina are intended to be used in conjunction with other patient information and based on professional judgment to assist with triage/prioritization of medical images. Notified clinicians ultimately review the full images per the standard of care.
Cina is a radiological computer-assisted triage and notification software device.
The software system is based on algorithm-programmed components and comprises a standard off-the-shelf operating system and additional image processing applications.
DICOM images are received, recorded, and filtered before processing. The series are processed chronologically by running algorithms on each series to detect suspected findings of an intracranial hemorrhage (ICH) or a large vessel occlusion (LVO); notifications for the flagged series are then sent to the Worklist Application.
The Worklist Application (on premise) displays pop-up notifications of new studies with suspected findings as they arrive, and provides both active and passive notifications. Active notifications take the form of a small pop-up containing the patient name, accession number, and the type of suspected finding (ICH or LVO). All non-enhanced head CT images and head CT angiography studies received by the Cina device are displayed in the worklist, and those on which the algorithms have detected a suspected finding (ICH or LVO) are marked with an icon (i.e., a passive notification). In addition, a compressed, small black-and-white image marked "not for diagnostic use" is displayed as a preview function. This compressed preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification. Presenting the radiologist with a notification facilitates earlier triage by allowing them to prioritize images in the PACS, so a suspect case receives attention earlier than it would under the standard of care alone.
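As an illustration, the active/passive notification and worklist prioritization behavior described above can be sketched as follows. The class and function names are hypothetical assumptions for this sketch, not Cina's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional


@dataclass
class Study:
    """A received head CT or CTA study (hypothetical fields)."""
    patient_name: str
    accession_number: str
    received_at: datetime
    suspected_finding: Optional[str] = None  # "ICH", "LVO", or None


def active_notification(study: Study) -> str:
    """Text of the small pop-up: patient name, accession number, finding type."""
    return (f"Suspected {study.suspected_finding}: "
            f"{study.patient_name} (accession {study.accession_number})")


def worklist_order(studies: List[Study]) -> List[Study]:
    """Flagged studies float to the top so they receive attention earlier;
    unflagged studies remain in arrival order per the standard of care."""
    return sorted(studies, key=lambda s: (s.suspected_finding is None,
                                          s.received_at))
```

Note that the device only reorders attention via notifications; per the indications, it never removes cases from the queue or marks findings on the original images.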
Here's a breakdown of the acceptance criteria and study details for the Cina device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Performance Metric | Acceptance Criteria (Performance Goal) | Cina-ICH Reported Performance | Cina-LVO Reported Performance |
|---|---|---|---|
| Sensitivity | 80% | 91.4% (95% CI: 87.2%–94.5%) | 97.9% (95% CI: 94.6%–99.4%) |
| Specificity | 80% | 97.5% (95% CI: 95.8%–98.6%) | 97.6% (95% CI: 95.1%–99.0%) |
| AUC (ROC) | Not explicitly stated, but comparable to predicate | 0.94 | 0.98 |
| Overall Agreement (Accuracy) | Not explicitly stated | 95.6% | 97.7% |
| Time-to-notification (Mean ± SD) | Not explicitly stated, but comparable to predicate | 13.2 ± 2.9 seconds | 25.8 ± 7.0 seconds |
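Metrics of this kind are computed from confusion-matrix counts against the ground truth. The sketch below uses hypothetical counts (the summary does not report TP/FP/TN/FN directly) and the Wilson score interval, a common choice for binomial confidence intervals; the submission's exact CI method is not stated.

```python
import math


def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half


def triage_metrics(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity, specificity (each with CI), and overall agreement."""
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
    }


# Hypothetical counts, not from the submission:
m = triage_metrics(tp=183, fn=17, tn=585, fp=15)
# An 80% performance goal is met when the CI lower bound exceeds 0.80.
meets_goal = (m["sensitivity"][1][0] > 0.80
              and m["specificity"][1][0] > 0.80)
```

Comparing the CI lower bound (rather than the point estimate) to the performance goal is the stricter and more common reading of such acceptance criteria.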
2. Sample Size Used for the Test Set and Data Provenance
- ICH Test Set: 814 clinical anonymized cases
- LVO Test Set: 476 clinical anonymized cases
- Data Provenance: Retrospective, multinational study from 3 clinical sources (2 US and 1 OUS).
- For LVO positive cases: 156 (83%) were US and 32 (17%) OUS.
- The datasets contained a sufficient number of cases from important cohorts regarding imaging acquisitions (scanner makers: GE, Siemens, Philips, Toshiba/Canon; number of detector rows, gantry tilt, and slice thickness) and patient groups (age, sex, and US regions).
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Three (3)
- Qualifications of Experts: US-board-certified neuroradiologist readers. (Specific years of experience not mentioned).
4. Adjudication Method for the Test Set
- Method: Concurrence of three US-board-certified neuroradiologist readers. (This implies a consensus or majority agreement method, often referred to as "3+0" or "majority rule" for establishing ground truth.)
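The "majority rule" interpretation above can be sketched as a simple adjudication function. The function and its tie-handling are illustrative assumptions, not the study's documented procedure.

```python
from collections import Counter
from typing import List, Optional


def adjudicate(reads: List[str]) -> Optional[str]:
    """Majority ground-truth label from independent reader labels.
    With three readers, at least two must agree ("2/3 majority")."""
    label, count = Counter(reads).most_common(1)[0]
    if count * 2 > len(reads):
        return label
    return None  # no majority: would need a consensus re-read


# e.g., three neuroradiologists labeling one case:
adjudicate(["ICH", "ICH", "negative"])  # majority label is "ICH"
```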
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- The provided document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study evaluating how human readers improve with or without AI assistance. The study described focuses on the standalone performance of the algorithm.
6. Standalone Performance Study
- Yes, a standalone (algorithm only) performance study was done. The sections "IX. Performance Testing" and "IX.2. Performance Testing" explicitly describe the evaluation of the Cina software's performance (Sensitivity, Specificity, AUC, Accuracy, and Time-to-notification) directly against the established ground truth.
7. Type of Ground Truth Used
- Type of Ground Truth: Expert consensus (established by the concurrence of three US-board-certified neuroradiologist readers, evaluating imaging findings). The text refers to "operators' visual assessments" which, in this context, refers to the expert readers' visual interpretation used to define the ground truth.
8. Sample Size for the Training Set
- The document does not explicitly state the sample size for the training set. It mentions the algorithm uses "an artificial intelligence algorithm with database of images" but does not provide the size of this database or how it was used for training versus testing.
9. How Ground Truth for the Training Set Was Established
- The document does not explicitly state how the ground truth for the training set was established. It only describes the method for the ground truth of the test set (concurrence of three neuroradiologists).
§ 892.2080 Radiological computer aided triage and notification software.
(a)
Identification. Radiological computer aided triage and notification software is an image processing prescription device intended to aid in prioritization and triage of radiological medical images. The device notifies a designated list of clinicians of the availability of time sensitive radiological medical images for review based on computer aided image analysis of those images performed by the device. The device does not mark, highlight, or direct users' attention to a specific location in the original image. The device does not remove cases from a reading queue. The device operates in parallel with the standard of care, which remains the default option for all cases.
(b)
Classification. Class II (special controls). The special controls for this device are:
(1) Design verification and validation must include:
(i) A detailed description of the notification and triage algorithms and all underlying image analysis algorithms including, but not limited to, a detailed description of the algorithm inputs and outputs, each major component or block, how the algorithm affects or relates to clinical practice or patient care, and any algorithm limitations.
(ii) A detailed description of pre-specified performance testing protocols and dataset(s) used to assess whether the device will provide effective triage (e.g., improved time to review of prioritized images for pre-specified clinicians).
(iii) Results from performance testing that demonstrate that the device will provide effective triage. The performance assessment must be based on an appropriate measure to estimate the clinical effectiveness. The test dataset must contain sufficient numbers of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, associated diseases, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals for these individual subsets can be characterized with the device for the intended use population and imaging equipment.
(iv) Stand-alone performance testing protocols and results of the device.
(v) Appropriate software documentation (e.g., device hazard analysis; software requirements specification document; software design specification document; traceability analysis; description of verification and validation activities including system level test protocol, pass/fail criteria, and results).
(2) Labeling must include the following:
(i) A detailed description of the patient population for which the device is indicated for use;
(ii) A detailed description of the intended user and user training that addresses appropriate use protocols for the device;
(iii) Discussion of warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality for certain subpopulations), as applicable;
(iv) A detailed description of compatible imaging hardware, imaging protocols, and requirements for input images;
(v) Device operating instructions; and
(vi) A detailed summary of the performance testing, including: test methods, dataset characteristics, triage effectiveness (e.g., improved time to review of prioritized images for pre-specified clinicians), diagnostic accuracy of algorithms informing triage decision, and results with associated statistical uncertainty (e.g., confidence intervals), including a summary of subanalyses on case distributions stratified by relevant confounders, such as lesion and organ characteristics, disease stages, and imaging equipment.