K Number: K222884
Manufacturer:
Date Cleared: 2023-03-02 (161 days)
Product Code:
Regulation Number: 892.2080
Panel: RA
Reference & Predicate Devices:
Intended Use

Rapid NCCT Stroke is a radiological computer-aided triage and notification software indicated for use in the analysis of non-enhanced head CT (NCCT) images. The device is intended to assist hospital networks and trained clinicians in workflow triage by flagging and communicating suspected positive findings on head CT images for (1) Intracranial Hemorrhage (ICH) and (2) NCCT large vessel occlusion (LVO) of the ICA and MCA-M1.

Rapid NCCT Stroke uses an artificial intelligence algorithm to analyze images and highlight cases with detected (1) ICH or (2) NCCT LVO on the Rapid server on premise or in the cloud in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected ICH or LVO findings via PACS, email or mobile device. Notifications include compressed preview images that are meant for informational purposes only, and are not intended for diagnostic use beyond notification.

The device does not alter the original medical image, and it is not intended to be used as a primary diagnostic device. The results of Rapid NCCT Stroke are intended to be used in conjunction with other patient information and based on professional judgment to assist with triage/prioritization of medical images. Notified clinicians are ultimately responsible for reviewing full images per the standard of care. Rapid NCCT Stroke is for Adults only.

Device Description

Rapid NCCT Stroke (RNS) is a radiological computer-assisted triage and notification software device. RNS is a non-enhanced CT (NCCT) processing module which operates within the integrated Rapid Platform to provide triage and notification of suspected intracranial hemorrhage (ICH) and NCCT Large Vessel Occlusion (LVO) of the ICA and MCA-M1. The RNS is an AI/ML SaMD. The output of the module is a priority notification to clinicians indicating the suspicion of ICH or NCCT LVO. ICH suspicion is determined by the ICH Algorithm, which identifies hemorrhage findings; NCCT LVO suspicion is determined by the combined analysis of the ASPECTS and Hyperdense Vessel Sign (HVS) algorithms. The RNS module uses the basic services supplied by the Rapid Platform, including DICOM processing, job management, imaging module execution, and imaging output including the notification and compressed image.

AI/ML Overview

The Rapid NCCT Stroke device is a radiological computer-aided triage and notification software for detecting intracranial hemorrhage (ICH) and large vessel occlusion (LVO) on non-enhanced head CT (NCCT) images.

Here is an analysis of its acceptance criteria and the study that supports them:

1. Table of Acceptance Criteria and Reported Device Performance

| Feature/Metric | Acceptance Criteria (implicit from study results and claims) | Reported Performance (ICH Algorithm) | Reported Performance (LVO Algorithm) |
| --- | --- | --- | --- |
| Sensitivity (ICH) | High, consistent with standalone module performance | 0.962 | N/A |
| Specificity (ICH) | High, consistent with standalone module performance | 0.974 | N/A |
| Sensitivity (LVO) | ≥ 0.544 (reported lower bound of 95% CI) | N/A | 0.635 |
| Specificity (LVO) | ≥ 0.891 (reported lower bound of 95% CI) | N/A | 0.951 |
| Expert non-inferiority (LVO) | Device performance non-inferior to human readers | N/A | Sensitivity for all readers: 0.436; difference in sensitivity (device vs. all readers): 0.199 (95% CI: 0.055-0.340) |
| Non-expert superiority (LVO) | Device performance superior to general radiologists | N/A | Sensitivity for general radiologists: 0.409; difference in sensitivity (device vs. general radiologists): 0.226 (95% CI: 0.071-0.381) |
| Time-to-notification (vs. SoC) | Significantly faster than standard-of-care time-to-exam-open | Mean: 2.5 minutes | Mean: 2.5 minutes |
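The LVO acceptance thresholds above are reported lower bounds of 95% confidence intervals. The summary does not give the raw 2x2 counts, but the reported bounds are consistent with Wilson score intervals on plausible counts (73/115 true positives, 98/103 true negatives). The sketch below is a reconstruction under that assumption, not the study's actual analysis:

```python
import math

def wilson_lower(successes: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score 95% CI for a binomial proportion,
    a common choice for sensitivity/specificity intervals."""
    p = successes / n
    denom = 1 + z**2 / n
    center = p + z**2 / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - half) / denom

# Hypothetical counts consistent with the reported LVO point estimates:
# sensitivity 73/115 = 0.635, specificity 98/103 = 0.951.
print(round(wilson_lower(73, 115), 3))  # 0.544 -- matches the Se threshold
print(round(wilson_lower(98, 103), 3))  # 0.891 -- matches the Sp threshold
```

That both reported thresholds fall out of the same plausible counts suggests they were simply the observed lower CI bounds, as the table's wording ("Lower 95% CI reported") indicates.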

2. Sample Sizes and Data Provenance

  • Test Set Sample Size: 254 cases. These cases included:
    • ICH Positive: 26
    • LVO Positive: 115
    • Negative for ICH and LVO: 103
    • Excluded: 10 (due to age and technical inadequacy)
  • Data Provenance: The study was a "retrospective, blinded, multicenter, multinational study." This indicates that the data was collected from multiple centers in various countries and that the analysis was performed on existing, pre-collected data. Specific countries are not mentioned.
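The reported composition is internally consistent: the three analyzed cohorts plus the exclusions account for the full test set, meaning 244 cases entered the analysis.

```python
# Consistency check of the test-set composition reported above.
ich_positive = 26
lvo_positive = 115
negative = 103   # negative for both ICH and LVO
excluded = 10    # excluded due to age and technical inadequacy

analyzed = ich_positive + lvo_positive + negative
print(analyzed)             # 244 cases entered the analysis
print(analyzed + excluded)  # 254 -- the full test set
```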

3. Number of Experts and Qualifications for Ground Truth

  • Ground Truth Establishment: The document mentions "expert reader truthing of the data." The specific number of experts is not explicitly stated for the ground truth establishment, but it is implied that multiple experts were involved given "expert reader truthing."
  • Qualifications of Experts: The document refers to "human readers" including "neuroradiologists and general radiologists" in the context of the secondary clinical endpoints. This suggests that the experts involved in establishing ground truth would likely possess similar qualifications in radiology, with expertise in neurological imaging, to accurately identify ICH and LVO.

4. Adjudication Method for the Test Set

The document does not explicitly describe an adjudication method like 2+1 or 3+1. It states that the ground truth was established by "expert reader truthing." This implies that a consensus or a well-defined process was used by the experts to determine the definitive diagnoses, but the specific mechanics of that process (e.g., number of readers, tie-breaking rules) are not detailed.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

Yes, a form of MRMC comparative effectiveness study was done for LVO.

  • Effect Size of Human Readers' Improvement with AI vs. without AI Assistance: The study did not directly assess how much human readers improve with AI assistance. Instead, it compared the standalone performance of the Rapid NCCT Stroke device to the performance of human readers (both general radiologists and a broader group of "all readers," which included experts) in identifying LVO.
    • Expert Non-inferiority: The device demonstrated non-inferiority to "overall readers" (experts and non-experts). The device's sensitivity was 0.635, while the sensitivity for "all readers" was 0.436. The difference in sensitivity (device vs. all readers) was 0.199 (95% CI: 0.055-0.340), indicating the device performed better than the overall human readers.
    • Non-expert Superiority: The device demonstrated superiority to "general radiologists". The device's sensitivity was 0.635, while the sensitivity for general radiologists was 0.409. The difference in sensitivity (device vs. general radiologists) was 0.226 (95% CI: 0.071-0.381), indicating the device performed better than general radiologists.
    • These results show that the standalone device performed better than human readers in terms of sensitivity for LVO detection. The study design doesn't provide an effect size for human reader improvement with AI assistance (i.e., a human-in-the-loop scenario).
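The reported differences in sensitivity follow directly from the quoted point estimates; the confidence intervals come from the study's own (likely paired) analysis and are quoted here rather than recomputed:

```python
# Point estimates quoted from the summary above.
device_se = 0.635
all_readers_se = 0.436           # neuroradiologists + general radiologists
general_radiologists_se = 0.409

# Differences cited in the non-inferiority / superiority comparisons.
print(round(device_se - all_readers_se, 3))           # 0.199
print(round(device_se - general_radiologists_se, 3))  # 0.226

# Both reported 95% CIs (0.055-0.340 and 0.071-0.381) exclude zero,
# which is what supports the claims beyond mere non-inferiority.
```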

6. Standalone (Algorithm Only) Performance Study

Yes, an algorithm-only standalone performance study was done.

  • The reported sensitivities and specificities for ICH (Se: 0.962, Sp: 0.974) and LVO (Se: 0.635, Sp: 0.951) refer to the standalone performance of the Rapid NCCT Stroke device.
  • The ICH algorithm's performance was noted to be "consistent with the ICH standalone module performance (K221456)," further confirming standalone evaluation.
  • The comparison against human readers (secondary clinical endpoints) also used the device's standalone output for comparison.
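Standalone sensitivity and specificity do not by themselves tell a clinician what fraction of triage notifications will be true positives; that depends on disease prevalence in the scanned population. A minimal sketch using the reported LVO figures and a purely hypothetical 20% prevalence (not a figure from the 510(k) summary):

```python
def ppv_npv(se: float, sp: float, prevalence: float) -> tuple:
    """Positive and negative predictive value from sensitivity,
    specificity, and prevalence (Bayes' rule)."""
    tp = se * prevalence
    fp = (1 - sp) * (1 - prevalence)
    tn = sp * (1 - prevalence)
    fn = (1 - se) * prevalence
    return tp / (tp + fp), tn / (tn + fn)

# Reported standalone LVO performance; 20% prevalence is an assumption
# chosen only to illustrate the dependence on case mix.
ppv, npv = ppv_npv(se=0.635, sp=0.951, prevalence=0.20)
print(round(ppv, 3))  # 0.764 -- share of LVO notifications that are true
print(round(npv, 3))  # 0.912 -- share of non-flagged cases that are negative
```

At lower prevalence the PPV drops quickly, which is one reason triage outputs are restricted to worklist prioritization rather than diagnosis.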

7. Type of Ground Truth Used

The ground truth used was expert consensus (referred to as "expert reader truthing of the data").

8. Sample Size for the Training Set

The document does not provide the sample size for the training set. It only describes the test set used for performance validation.

9. How Ground Truth for the Training Set Was Established

The document does not provide information on how the ground truth for the training set was established. It focuses solely on the validation study and the ground truth for its test set.

§ 892.2080 Radiological computer aided triage and notification software.

(a) Identification. Radiological computer aided triage and notification software is an image processing prescription device intended to aid in prioritization and triage of radiological medical images. The device notifies a designated list of clinicians of the availability of time sensitive radiological medical images for review based on computer aided image analysis of those images performed by the device. The device does not mark, highlight, or direct users' attention to a specific location in the original image. The device does not remove cases from a reading queue. The device operates in parallel with the standard of care, which remains the default option for all cases.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Design verification and validation must include:

(i) A detailed description of the notification and triage algorithms and all underlying image analysis algorithms including, but not limited to, a detailed description of the algorithm inputs and outputs, each major component or block, how the algorithm affects or relates to clinical practice or patient care, and any algorithm limitations.

(ii) A detailed description of pre-specified performance testing protocols and dataset(s) used to assess whether the device will provide effective triage (e.g., improved time to review of prioritized images for pre-specified clinicians).

(iii) Results from performance testing that demonstrate that the device will provide effective triage. The performance assessment must be based on an appropriate measure to estimate the clinical effectiveness. The test dataset must contain sufficient numbers of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, associated diseases, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals for these individual subsets can be characterized with the device for the intended use population and imaging equipment.

(iv) Stand-alone performance testing protocols and results of the device.

(v) Appropriate software documentation (e.g., device hazard analysis; software requirements specification document; software design specification document; traceability analysis; description of verification and validation activities including system level test protocol, pass/fail criteria, and results).

(2) Labeling must include the following:

(i) A detailed description of the patient population for which the device is indicated for use;

(ii) A detailed description of the intended user and user training that addresses appropriate use protocols for the device;

(iii) Discussion of warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality for certain subpopulations), as applicable;

(iv) A detailed description of compatible imaging hardware, imaging protocols, and requirements for input images;

(v) Device operating instructions; and

(vi) A detailed summary of the performance testing, including: test methods, dataset characteristics, triage effectiveness (e.g., improved time to review of prioritized images for pre-specified clinicians), diagnostic accuracy of algorithms informing triage decision, and results with associated statistical uncertainty (e.g., confidence intervals), including a summary of subanalyses on case distributions stratified by relevant confounders, such as lesion and organ characteristics, disease stages, and imaging equipment.