ContaCT is a notification-only, parallel workflow tool for use by hospital networks and trained clinicians to identify and communicate images of specific patients to a specialist, independent of standard of care workflow.
ContaCT uses an artificial intelligence algorithm to analyze images for findings suggestive of a pre-specified clinical condition and to notify an appropriate medical specialist of these findings in parallel to standard of care image interpretation. Identification of suspected findings is not for diagnostic use beyond notification. Specifically, the device analyzes CT angiogram images of the brain acquired in the acute setting, and sends notifications to a neurovascular specialist that a suspected large vessel occlusion has been identified and recommends review of those images. Images can be previewed through a mobile application.
Images that are previewed through the mobile application are compressed, are for informational purposes only, and are not intended for diagnostic use beyond notification. Notified clinicians are responsible for viewing non-compressed images on a diagnostic viewer and engaging in appropriate patient evaluation and relevant discussion with a treating physician before making care-related decisions or requests. ContaCT is limited to analysis of imaging data and should not be used in lieu of full patient evaluation or relied upon to make or confirm a diagnosis.
ContaCT is a notification-only, parallel workflow tool installed across the stroke network in healthcare facilities to identify and communicate images and information of specific patients to a neurovascular specialist (b) (4) patients' CT scan. As discussed below, the device facilitates a workflow parallel to the standard of care workflow, and, in the case of a true positive study, results in a notified specialist entering the standard of care workflow earlier.
The device works in parallel to the standard of care workflow. After a CTA has been performed, a copy of the study is automatically sent to and processed by ContaCT. ContaCT performs vessel segmentation and quantifies image characteristics consistent with a Large Vessel Occlusion (LVO) in a large cerebral vessel, and sends a notification based on a fixed threshold to a neurovascular specialist, recommending review of these images. Notifications provide links to preview a compressed version of the identified study on a mobile application.
ContaCT is a software only device that can be segmented into three components: (1) Image Forwarding Software, (2) Image Processing and Analysis Software, and (3) Image Viewing Software.
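The parallel notification flow described above can be sketched in a few lines. This is a minimal illustration, not the vendor's implementation: the function names, the dictionary fields, and the numeric threshold value are all assumptions (the summary says only that the device uses a fixed threshold, without publishing it).

```python
# Illustrative sketch of a fixed-threshold, notification-only flow.
# FIXED_THRESHOLD and lvo_score() are hypothetical stand-ins; the
# actual algorithm performs vessel segmentation and LVO feature
# quantification on the forwarded CTA copy.

FIXED_THRESHOLD = 0.5  # assumed value; the real device threshold is fixed but unpublished


def lvo_score(cta_study):
    """Placeholder for segmentation + quantification of LVO-consistent features."""
    return cta_study.get("suspicion_score", 0.0)


def process_study(cta_study, notify):
    """Analyze a forwarded CTA copy; notify a specialist if the fixed
    threshold is met. Standard-of-care reading continues in parallel
    regardless of the outcome."""
    if lvo_score(cta_study) >= FIXED_THRESHOLD:
        notify(cta_study["study_id"])  # notification links to a compressed preview
        return True
    return False
```

The key design point the sketch captures is that the device never alters the reading queue: a negative result is simply a non-event, and a positive result adds a notification on top of the unchanged standard-of-care pathway.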
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document doesn't explicitly state "acceptance criteria" in a numerical or target format for sensitivity and specificity. However, it indicates that the observed performance exceeded an unspecified pre-defined goal. The primary performance metrics were sensitivity and specificity related to the device's ability to identify suspected LVOs. For time-to-notification, the goal was to demonstrate a substantial reduction compared to Standard of Care.
| Acceptance Criterion (Implied/Observed Goal) | Reported Device Performance (Viz.AI ContaCT) |
|---|---|
| Sensitivity for LVO detection (exceeded pre-defined goal) | 87.8% (95% CI: 81.2%-92.5%) |
| Specificity for LVO detection (exceeded pre-defined goal) | 89.6% (95% CI: 83.7%-93.9%) |
| Area under the ROC curve (AUC) | 0.91 |
| CTA-to-notification time, mean (substantially shorter than standard of care) | 7.32 minutes (95% CI: 5.51-9.13) |
| CTA-to-notification time, median (substantially shorter than standard of care) | 5.60 minutes |
| Mean difference in notification time vs. standard of care | 51.40 minutes (95% CI: 36.32-58.72) |
| Cases where ContaCT notification was earlier than standard of care | 95.5% (42 of 44 cases) |
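The confidence intervals in the table are consistent with a standard binomial interval on the observed proportions. The sketch below computes a 95% Wilson score interval; the underlying 2x2 counts are hypothetical (chosen only to be roughly compatible with the reported percentages), since the document reports percentages and intervals, not the raw table.

```python
import math


def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half


# Hypothetical counts for illustration only; not from the submission.
tp, fn = 129, 18   # assumed split of LVO-positive studies
tn, fp = 138, 15   # assumed split of LVO-negative studies
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```

With these assumed counts, `wilson_ci(tp, tp + fn)` yields an interval close to the reported 81.2%-92.5% for sensitivity, which is how such intervals are typically derived from a fixed test set.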
2. Sample Size and Data Provenance for the Test Set
- Sample Size: 300 CT angiogram (CTA) images (studies).
- Data Provenance: Obtained from two clinical sites in the U.S.
- Retrospective or Prospective: Retrospective study.
3. Number of Experts and Qualifications for Ground Truth Establishment
- Number of Experts: At least three neuro-radiologists were involved in establishing ground truth. The initial review was performed by an unspecified number of neuro-radiologists; in cases of disagreement, an additional neuro-radiologist provided an opinion, and ground truth was established by majority consensus.
- Qualifications: "Trained neuro-radiologists." No specific years of experience are mentioned, but "trained" implies relevant expertise.
4. Adjudication Method for the Test Set
The adjudication method for establishing ground truth was majority consensus. If the initial neuro-radiologists did not agree on whether a study contained an LVO and therefore required further review, an additional neuro-radiologist provided an opinion to reach a majority decision. The phrasing "neuro-radiologists did not agree" implies more than one initial reviewer, suggesting a "2+1" adjudication model (or a larger panel if more initial reviewers were involved).
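The inferred "2+1" flow can be expressed compactly. This is a sketch of the adjudication logic as described, under the assumption of two initial readers; the function names are illustrative.

```python
# Sketch of majority-consensus adjudication: two initial reads, with a
# third reader breaking disagreements. Reader count is an assumption
# inferred from the text, not explicitly stated in the document.

def adjudicate(initial_reads, tie_breaker=None):
    """Return the ground-truth label (True = LVO suspected, study needs
    further review) from the initial reads; on disagreement, an
    additional reader's opinion decides by majority."""
    if len(set(initial_reads)) == 1:
        return initial_reads[0]          # unanimous initial reads
    if tie_breaker is None:
        raise ValueError("disagreement requires an additional reader")
    votes = list(initial_reads) + [tie_breaker]
    return votes.count(True) > votes.count(False)
```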
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The document explicitly states: "Viz.AI didn't conduct a clinical reader study for the underlying CAD as the device doesn't have diagnostic outputs other than the notification."
The study design focused on the device's standalone performance (sensitivity, specificity) and its impact on notification time, comparing the device's automated notification time to documented standard of care notification times. It did not evaluate human reader performance with and without AI assistance.
6. Standalone Performance (Algorithm Only) Study
Yes, a standalone performance study was done.
- Methodology: 300 CTA studies were processed by the ContaCT device. The device's output (notification or no notification) was compared against the neuro-radiologist-established ground truth.
- Metrics: Sensitivity, Specificity, and Area Under the Receiver Operating Characteristic (ROC) Curve were calculated based on the device's classification (LVO suspected/not suspected) versus ground truth.
- Time-to-notification: The time taken by ContaCT to generate a notification was measured and compared to documented standard of care notification times.
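The standalone evaluation reduces to comparing binary device outputs (notification sent or not) against the adjudicated labels. A minimal sketch of that comparison, with illustrative inputs:

```python
# Minimal sketch of the standalone metrics: sensitivity and specificity
# of binary notifications against adjudicated ground truth. Input
# encodings (1 = LVO / notification, 0 = neither) are illustrative.

def standalone_metrics(device_flags, truth_labels):
    """Return (sensitivity, specificity) from paired binary sequences."""
    pairs = list(zip(device_flags, truth_labels))
    tp = sum(1 for d, t in pairs if d and t)
    fn = sum(1 for d, t in pairs if not d and t)
    tn = sum(1 for d, t in pairs if not d and not t)
    fp = sum(1 for d, t in pairs if d and not t)
    return tp / (tp + fn), tn / (tn + fp)
```

Because the device emits only a binary notification, an ROC curve (and the reported AUC of 0.91) would have to be computed on the algorithm's internal continuous score swept across thresholds, not on the fixed-threshold output shown here.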
7. Type of Ground Truth Used
The ground truth used for the test set was expert consensus by "trained neuro-radiologists." Specifically, it was established by neuro-radiologists determining if an image "contained image features consistent with an LVO, and thus required further review," with majority consensus used for disagreements.
8. Sample Size for the Training Set
The document states: "Training CTA studies were used from multiple facilities to develop and train the algorithm." It does not specify the exact sample size for the training set.
9. How Ground Truth for the Training Set was Established
The document does not detail how training-set ground truth was established; it states only that "Training CTA studies were used from multiple facilities to develop and train the algorithm." It can be inferred that expert labeling or a similar process was involved, as is typical for supervised learning algorithms. The text also mentions "initial development and training; pre- and post- processing fine-tuning; and threshold optimization," phases in which labeled ground-truth data would be essential.
§ 892.2080 Radiological computer aided triage and notification software.
(a) Identification. Radiological computer aided triage and notification software is an image processing prescription device intended to aid in prioritization and triage of radiological medical images. The device notifies a designated list of clinicians of the availability of time sensitive radiological medical images for review based on computer aided image analysis of those images performed by the device. The device does not mark, highlight, or direct users' attention to a specific location in the original image. The device does not remove cases from a reading queue. The device operates in parallel with the standard of care, which remains the default option for all cases.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Design verification and validation must include:

(i) A detailed description of the notification and triage algorithms and all underlying image analysis algorithms including, but not limited to, a detailed description of the algorithm inputs and outputs, each major component or block, how the algorithm affects or relates to clinical practice or patient care, and any algorithm limitations.

(ii) A detailed description of pre-specified performance testing protocols and dataset(s) used to assess whether the device will provide effective triage (e.g., improved time to review of prioritized images for pre-specified clinicians).

(iii) Results from performance testing that demonstrate that the device will provide effective triage. The performance assessment must be based on an appropriate measure to estimate the clinical effectiveness. The test dataset must contain sufficient numbers of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, associated diseases, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals for these individual subsets can be characterized with the device for the intended use population and imaging equipment.

(iv) Stand-alone performance testing protocols and results of the device.

(v) Appropriate software documentation (e.g., device hazard analysis; software requirements specification document; software design specification document; traceability analysis; description of verification and validation activities including system level test protocol, pass/fail criteria, and results).

(2) Labeling must include the following:

(i) A detailed description of the patient population for which the device is indicated for use;

(ii) A detailed description of the intended user and user training that addresses appropriate use protocols for the device;

(iii) Discussion of warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality for certain subpopulations), as applicable;

(iv) A detailed description of compatible imaging hardware, imaging protocols, and requirements for input images;

(v) Device operating instructions; and

(vi) A detailed summary of the performance testing, including: test methods, dataset characteristics, triage effectiveness (e.g., improved time to review of prioritized images for pre-specified clinicians), diagnostic accuracy of algorithms informing triage decision, and results with associated statistical uncertainty (e.g., confidence intervals), including a summary of subanalyses on case distributions stratified by relevant confounders, such as lesion and organ characteristics, disease stages, and imaging equipment.