K Number
K211788
Device Name
HALO
Manufacturer
Date Cleared
2021-07-08 (29 days)

Product Code
Regulation Number
892.2080
Reference & Predicate Devices
Predicate For
N/A
Intended Use

HALO is a notification-only, cloud-based image processing software device that uses artificial intelligence algorithms to analyze patient imaging data in parallel to the standard of care imaging interpretation. Its intended use is to identify imaging patterns suggestive of a pre-specified condition and to directly notify an appropriate medical specialist.

HALO's indication is to facilitate the evaluation of the brain vasculature in patients suspected of stroke by processing and analyzing CT angiograms of the brain acquired in an acute setting. After completion of the data analysis, HALO sends a notification if a pattern suggestive of a suspected intracranial Large Vessel Occlusion (LVO) of the anterior circulation (ICA, M1 or M2) has been identified in an image.

The intended users of HALO are defined as medical specialists, or a team of specialists, involved in the diagnosis and care of stroke patients at emergency departments where stroke patients are admitted. These include physicians such as neurologists, radiologists, and/or other emergency department physicians.

HALO's output should not be used for primary diagnosis or clinical decisions; the final diagnosis is always decided upon by the medical specialist. HALO is indicated for CT scanners from GE Healthcare and Philips.

Device Description

HALO is a notification-only, cloud-based clinical support tool which identifies image features and communicates the analysis results to a specialist in parallel to the standard of care workflow.

HALO is designed to process CT angiograms of the brain and facilitate evaluation of these images using artificial intelligence to detect patterns suggestive of an intracranial large vessel occlusion (LVO) of the anterior circulation.

A copy of the original CTA images is sent to HALO cloud servers for automatic image processing. After analyzing the images, HALO sends a notification regarding a suspected finding to a specialist, recommending review of these images. The specialist can review the results remotely in a compatible DICOM web viewer.
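The workflow described above is essentially receive, analyze, notify. Below is a minimal sketch of a notification-only triage pipeline of this kind; all function and type names (receive_cta_series, run_lvo_model, notify_specialists, TriageResult) are hypothetical placeholders for illustration, not HALO's actual interfaces.

```python
# Hypothetical sketch of a notification-only triage pipeline.
# None of these names come from the 510(k) summary.
from dataclasses import dataclass
from typing import List


@dataclass
class TriageResult:
    suspected_lvo: bool   # pattern suggestive of an anterior-circulation LVO
    confidence: float     # model score in [0, 1]


def receive_cta_series(study_uid: str) -> List[bytes]:
    """Placeholder: fetch a copy of the original CTA images for one study."""
    return []


def run_lvo_model(images: List[bytes]) -> TriageResult:
    """Placeholder: automatic image analysis producing a suspected-finding flag."""
    return TriageResult(suspected_lvo=False, confidence=0.0)


def notify_specialists(study_uid: str, result: TriageResult) -> None:
    """Placeholder: push a notification recommending review in a compatible viewer."""
    print(f"Study {study_uid}: suspected LVO (score {result.confidence:.2f}) - please review")


def triage_study(study_uid: str) -> None:
    images = receive_cta_series(study_uid)
    result = run_lvo_model(images)
    if result.suspected_lvo:
        notify_specialists(study_uid, result)
```

In the cleared workflow, any such notification only recommends review; the standard of care read proceeds unchanged and remains the basis for diagnosis.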

AI/ML Overview

Here's a detailed breakdown of the acceptance criteria and the study demonstrating that the device meets them, based on the provided FDA 510(k) summary for HALO:

1. Table of Acceptance Criteria and Reported Device Performance

Acceptance Criteria                             | Reported Device Performance
Primary Endpoints:                              |
  LVO Detection Sensitivity                     | 91.3% (95% CI, 86.6%-94.8%)
  LVO Detection Specificity                     | 85.9% (95% CI, 80.6%-90.2%)
  Area Under the Curve (AUC) for LVO Detection  | 0.97
Secondary Endpoints:                            |
  Median Notification Time for Detected LVOs    | 4 minutes 29 seconds (minimum 3:47, maximum 7:12)

The document states that "The HALO performance with regard to sensitivity and specificity, and the notification time are both equivalent to that of the selected predicate device." This implies that the reported performance metrics met or exceeded the established criteria for substantial equivalence to the predicate.
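The summary does not report the underlying confusion-matrix counts. As a worked illustration of how point estimates and 95% confidence intervals like those in the table are derived, here is a short sketch using a Wilson score interval; the counts below are hypothetical values chosen only to be roughly consistent with the reported percentages and the 427-patient test set, not figures from the submission.

```python
# Illustrative only: sensitivity/specificity with 95% Wilson score intervals.
# The TP/FN/TN/FP counts are hypothetical, NOT from the 510(k) summary.
from math import sqrt


def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half


def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)


def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)


# Hypothetical counts summing to 427 cases, for illustration of the arithmetic:
tp, fn, tn, fp = 189, 18, 189, 31
print(f"Sensitivity {sensitivity(tp, fn):.1%}, 95% CI {wilson_ci(tp, tp + fn)}")
print(f"Specificity {specificity(tn, fp):.1%}, 95% CI {wilson_ci(tn, tn + fp)}")
```

The submission may have used a different interval method (e.g., Clopper-Pearson), so exact bounds need not match; the sketch only shows the kind of calculation behind the reported figures.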

2. Sample Size and Data Provenance

  • Test Set Sample Size: 427 patients after exclusions (originally 434 CTA scans).
  • Data Provenance: Retrospective, multi-center clinical study. Patients were admitted to US comprehensive stroke centers.

3. Number and Qualifications of Experts for Ground Truth

  • Number of Experts: 3 neuroradiologists.
  • Qualifications: "Expert panel consisting of 3 neuro radiologists." Specific details on years of experience or board certification are not provided in this document.

4. Adjudication Method for the Test Set

The document states: "Ground truth was established by an expert panel consisting of 3 neuro radiologists." While it doesn't explicitly detail the adjudication method (e.g., 2+1, 3+1, consensus discussion), the wording suggests a consensus-based approach among the three experts. "Established by" implies a final, agreed-upon determination, not individual readings.
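Because the summary only says the panel "established" the ground truth, the exact adjudication rule is unknown. Purely as an illustration of one common approach, the sketch below implements a simple two-of-three majority vote over binary reader labels; this is an assumption, not the method described in the submission.

```python
# Hypothetical consensus rule: simple majority of three readers.
from collections import Counter


def majority_label(reader_labels: list[bool]) -> bool:
    """Return the label assigned by at least 2 of the 3 readers."""
    counts = Counter(reader_labels)
    return counts[True] > counts[False]


# Example: readers disagree on a case
print(majority_label([True, True, False]))   # True  -> LVO positive
print(majority_label([False, True, False]))  # False -> LVO negative
```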

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

No MRMC comparative effectiveness study involving human readers with vs. without AI assistance is mentioned in the provided text for this specific device clearance. The study described focuses on the standalone performance of the AI algorithm.

6. Standalone (Algorithm Only) Performance

Yes, a standalone performance study was done. The reported sensitivity, specificity, and AUC are all metrics of the algorithm's performance without human intervention in the detection and notification process. The intended use of HALO is to "directly notify an appropriate medical specialist" if a suspected finding is identified, running "in parallel to the standard of care imaging interpretation." This means its function is to flag cases for specialist review, not to replace that interpretation.
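For illustration, a standalone AUC of this kind is computed by scoring every case with the algorithm alone and comparing the scores against the expert-panel reference standard, with no reader in the loop. A minimal sketch follows, assuming scikit-learn is available; the labels and scores are fabricated examples, not study data.

```python
# Standalone (algorithm-only) AUC sketch with fabricated data.
from sklearn.metrics import roc_auc_score

y_true = [1, 1, 1, 0, 0, 0, 1, 0]                            # hypothetical ground-truth LVO labels
y_score = [0.92, 0.81, 0.67, 0.30, 0.12, 0.45, 0.88, 0.20]   # hypothetical model scores

print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
```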

7. Type of Ground Truth Used

The ground truth used was expert consensus among three neuroradiologists, based on their interpretation of the CTA scans.

8. Sample Size for the Training Set

The document does not specify the sample size used for the training set. It only mentions the test set of 427 patients. It alludes to the algorithm using "a database of images" for its AI model but provides no numbers for this database's size or composition regarding training.

9. How Ground Truth for the Training Set Was Established

The document does not explicitly state how the ground truth for the training set was established. It only details the ground truth establishment for the test set. It is common practice for training data ground truth to be established through expert labeling or other robust methods, but this information is not provided here.

§ 892.2080 Radiological computer aided triage and notification software.

(a) Identification. Radiological computer aided triage and notification software is an image processing prescription device intended to aid in prioritization and triage of radiological medical images. The device notifies a designated list of clinicians of the availability of time sensitive radiological medical images for review based on computer aided image analysis of those images performed by the device. The device does not mark, highlight, or direct users' attention to a specific location in the original image. The device does not remove cases from a reading queue. The device operates in parallel with the standard of care, which remains the default option for all cases.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Design verification and validation must include:

(i) A detailed description of the notification and triage algorithms and all underlying image analysis algorithms including, but not limited to, a detailed description of the algorithm inputs and outputs, each major component or block, how the algorithm affects or relates to clinical practice or patient care, and any algorithm limitations.

(ii) A detailed description of pre-specified performance testing protocols and dataset(s) used to assess whether the device will provide effective triage (e.g., improved time to review of prioritized images for pre-specified clinicians).

(iii) Results from performance testing that demonstrate that the device will provide effective triage. The performance assessment must be based on an appropriate measure to estimate the clinical effectiveness. The test dataset must contain sufficient numbers of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, associated diseases, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals for these individual subsets can be characterized with the device for the intended use population and imaging equipment.

(iv) Stand-alone performance testing protocols and results of the device.

(v) Appropriate software documentation (e.g., device hazard analysis; software requirements specification document; software design specification document; traceability analysis; description of verification and validation activities including system level test protocol, pass/fail criteria, and results).

(2) Labeling must include the following:

(i) A detailed description of the patient population for which the device is indicated for use;

(ii) A detailed description of the intended user and user training that addresses appropriate use protocols for the device;

(iii) Discussion of warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality for certain subpopulations), as applicable;

(iv) A detailed description of compatible imaging hardware, imaging protocols, and requirements for input images;

(v) Device operating instructions; and

(vi) A detailed summary of the performance testing, including: test methods, dataset characteristics, triage effectiveness (e.g., improved time to review of prioritized images for pre-specified clinicians), diagnostic accuracy of algorithms informing triage decision, and results with associated statistical uncertainty (e.g., confidence intervals), including a summary of subanalyses on case distributions stratified by relevant confounders, such as lesion and organ characteristics, disease stages, and imaging equipment.
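As a side note on special control (b)(1)(iii), which asks for performance estimates and confidence intervals within important cohorts, a subgroup analysis of that kind can be sketched as below. The subgroup variable (scanner vendor), the case records, and the Wilson-interval choice are hypothetical illustrations, not part of the regulation text or the HALO submission.

```python
# Hypothetical per-subgroup sensitivity with 95% Wilson score intervals.
from collections import defaultdict
from math import sqrt


def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half


# (scanner_vendor, lvo_present, device_flagged) -- fabricated records
cases = [
    ("GE", True, True), ("GE", True, False), ("GE", False, False),
    ("Philips", True, True), ("Philips", False, False), ("Philips", False, True),
]

by_group = defaultdict(list)
for vendor, truth, flagged in cases:
    by_group[vendor].append((truth, flagged))

for vendor, rows in by_group.items():
    positives = [(t, f) for t, f in rows if t]
    tp = sum(1 for t, f in positives if f)
    n = len(positives)
    if n:
        lo, hi = wilson_ci(tp, n)
        print(f"{vendor}: sensitivity {tp}/{n} = {tp / n:.0%}, 95% CI ({lo:.2f}, {hi:.2f})")
```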