K Number
K221564
Manufacturer
Date Cleared
2023-02-23

(268 days)

Product Code
Regulation Number
892.2060
Panel
RA
Reference & Predicate Devices
Intended Use

Brainomix 360 e-ASPECTS is a computer-aided diagnosis (CADx) software device used to assist the clinician in the assessment and characterization of brain tissue abnormalities using CT image data.

The software automatically registers images and uses an atlas to segment and analyze ASPECTS regions. Brainomix 360 e-ASPECTS extracts image data from individual voxels in the image, computes an analysis, and relates that analysis to the atlas-defined ASPECTS regions. The imaging features are then synthesized by an artificial intelligence algorithm into a single ASPECTS (Alberta Stroke Program Early CT Score).

Brainomix 360 e-ASPECTS is indicated for evaluation of patients presenting for diagnostic imaging workup with known MCA or ICA occlusion, for evaluation of extent of disease. Extent of disease refers to the number of ASPECTS regions affected which is reflected in the total score. Brainomix 360 e-ASPECTS provides information that may be useful in the characterization of ischemic brain tissue injury during image interpretation (within 6 hours from time last known well).

Brainomix 360 e-ASPECTS provides a comparative analysis to the ASPECTS standard of care radiologist assessment by providing highlighted ASPECTS regions and an automated editable ASPECTS score for clinician review. Brainomix 360 e-ASPECTS additionally provides a visualization of the voxels contributing to the automated ASPECTS score and the voxels excluded from the automated ASPECTS score.

Limitations:

  1. Brainomix 360 e-ASPECTS is not intended for primary interpretation of CT images. It is used to assist physician evaluation.

  2. Brainomix 360 e-ASPECTS has been validated in patients with known MCA or ICA occlusion prior to ASPECTS scoring.

  3. Brainomix 360 e-ASPECTS is not suitable for use on brain scans displaying neurological pathologies other than acute stroke, such as tumours or abscesses, haemorrhagic transformation and hematoma.

  4. Use of the Brainomix 360 e-ASPECTS Module in clinical settings other than brain ischemia caused by known ICA or MCA occlusions, within 6 hours from time last known well, has not been tested.

  5. Brainomix 360 e-ASPECTS has only been validated and is intended to be used in patient populations aged over 21.

  6. Brainomix 360 e-ASPECTS has been validated and is intended to be used on Siemens Somatom Definition scanners.

  7. Brainomix 360 e-ASPECTS is not intended for mobile diagnostic use. Images viewed on a mobile platform are compressed preview images and not for diagnostic interpretation.

Contraindications/Exclusions/Cautions:

· Patient motion: Excessive patient motion leading to artifacts that make the scan technically inadequate.

· Haemorrhagic transformation, hematoma.
Device Description

Brainomix 360 e-ASPECTS is a stand-alone software device which uses machine learning algorithms to automatically process NCCT (Non-contrast CT scans) brain image data to provide an output ASPECTS score based on the Alberta Stroke Program Early CT Score (ASPECTS) guidelines.

The post-processing image results and ASPECTS score are identified based on regional imaging features and overlaid onto the brain scan images. e-ASPECTS provides an automatic ASPECTS score for the physician based on the input CT data, indicating which ASPECTS regions are affected according to regional imaging features derived from the non-contrast computed tomography (NCCT) brain image data. The results are generated based on the Alberta Stroke Program Early CT Score (ASPECTS) guidelines and provided to the clinician for review and verification. At the discretion of the clinician, the scores may be adjusted based on the clinician's judgment.
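
For illustration only, the sketch below shows a generic per-region scoring pipeline of the kind described above: atlas-defined regions are classified from simple voxel features by a trained classifier, and one point is deducted from 10 for each affected region. The region names follow the standard ASPECTS template, but the feature set and the scikit-learn RandomForestClassifier are placeholders and do not represent Brainomix's proprietary algorithm.

```python
# Illustrative sketch only: a generic per-region ASPECTS scoring pipeline.
# The ASPECTS region names are standard; the features and the
# RandomForestClassifier are placeholders, not Brainomix's algorithm.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

ASPECTS_REGIONS = ["C", "L", "IC", "I", "M1", "M2", "M3", "M4", "M5", "M6"]

def extract_region_features(volume: np.ndarray, region_mask: np.ndarray) -> np.ndarray:
    """Toy voxel-level features for one atlas-defined region (mean HU, spread, low-HU fraction)."""
    voxels = volume[region_mask]
    return np.array([voxels.mean(), voxels.std(), (voxels < 20).mean()])

def score_aspects(volume: np.ndarray, region_masks: dict, model: RandomForestClassifier) -> int:
    """Deduct one point from 10 for each region a trained model flags as showing early ischemic change."""
    affected = 0
    for name in ASPECTS_REGIONS:
        features = extract_region_features(volume, region_masks[name])
        affected += int(model.predict(features.reshape(1, -1))[0])
    return 10 - affected
```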

Brainomix 360 e-ASPECTS can connect with other DICOM-compliant devices, for example to transfer NCCT scans from a Picture Archiving and Communication System (PACS) to Brainomix 360 e-ASPECTS software for processing.
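
As a rough sketch of this kind of DICOM connectivity, the example below runs a Storage SCP with the open-source pynetdicom library so that a PACS can push NCCT instances to it. The AE title, port, and output directory are arbitrary assumptions and do not reflect Brainomix's actual interface.

```python
# Minimal DICOM Storage SCP sketch using the open-source pynetdicom library.
# The AE title, port, and output directory are arbitrary assumptions.
from pathlib import Path

from pynetdicom import AE, evt, AllStoragePresentationContexts

Path("incoming").mkdir(exist_ok=True)

def handle_store(event):
    """Save each received CT instance to disk for downstream processing."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    ds.save_as(f"incoming/{ds.SOPInstanceUID}.dcm")
    return 0x0000  # Success

ae = AE(ae_title="EASPECTS_SCP")
ae.supported_contexts = AllStoragePresentationContexts
ae.start_server(("0.0.0.0", 11112), block=True,
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```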

Results and images can be sent to a PACS via DICOM transfer and can be viewed on a PACS workstation or via a web user interface on any machine contained and accessed within a hospital network and firewall, with a connection to the Brainomix 360 e-ASPECTS software (e.g., a LAN connection).

Brainomix 360 e-ASPECTS notification capabilities enable clinicians to preview images via email notification with result image attachments.

Images that are previewed via e-mail are compressed, are for preview purposes only, and are not intended for diagnostic use beyond notification.

Brainomix 360 e-ASPECTS is not intended for mobile diagnostic use. Notified clinicians are responsible for viewing non-compressed images on a diagnostic viewer and engaging in appropriate patient evaluation and relevant discussion with a treating physician before making care-related decisions or requests.

Brainomix 360 e-ASPECTS provides an automated workflow which will automatically process image data received by the system in accordance with pre-configured user DICOM routing preferences.

Once images are received, image processing is applied automatically. When processing has completed, notifications are sent to pre-configured users to inform them that the image processing results are ready. Users can then access and review the results and images via the Web User Interface case viewer or a PACS viewer.
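
A minimal sketch of such a pre-configured routing and notification step is shown below; the configuration keys, addresses, and helper names are hypothetical and stand in for the product's actual workflow engine.

```python
# Hypothetical routing/notification step: configuration keys, addresses, and
# helper names are placeholders, not the product's workflow engine.
import smtplib
from email.message import EmailMessage

ROUTING_PREFS = {
    "notify": ["stroke.team@example.org"],   # pre-configured recipients
    "send_results_to_pacs": True,
}

def notify_results_ready(case_id: str, recipients: list[str]) -> None:
    """Send a plain-text 'results ready' notification (preview only, not for diagnosis)."""
    msg = EmailMessage()
    msg["Subject"] = f"e-ASPECTS results ready for case {case_id}"
    msg["From"] = "noreply@example.org"
    msg["To"] = ", ".join(recipients)
    msg.set_content("Processing complete. Review on the web case viewer or your PACS workstation.")
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)
```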

Brainomix 360 e-ASPECTS principal workflow for NCCT includes the following key steps:

  • NCCT image loading.
  • Automated image analysis and processing to identify and visualize the voxels which have been included in the ASPECTS score and the voxels which have been excluded from the ASPECTS score (also referred to as a 'heat map').
  • Automated image analysis and processing to register the subject image to an atlas to segment and highlight ASPECTS regions and to display whether or not each region is qualified as contributing to the ASPECTS score.
  • Notifications and alerts to users.
  • Generation of a summary results report.
  • Presentation of results for review and analysis by users.

Once the physician has been notified of availability of the ASPECTS score, the system requires that the physician confirms that the case in question is for an ICA occlusion. The ASPECTS results, including the ASPECTS score, indication of affected side, affected ASPECTS regions and voxel-wise analysis (shown as a heatmap of voxels 'contributing to e-ASPECTS score' and a heat map of voxels 'excluded from e-ASPECTS score') can be exported as a report and/or sent to the Picture Archiving and Communications System (PACS).
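
For illustration, results could be pushed to a PACS with a DICOM C-STORE request, for example via pynetdicom as sketched below; the host, port, AE titles, and result file are placeholder assumptions, not the product's actual export mechanism.

```python
# Illustrative C-STORE push of a result object to PACS via pynetdicom.
# The host, port, AE titles, and result file are placeholder assumptions.
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import SecondaryCaptureImageStorage

ae = AE(ae_title="EASPECTS_SCU")
ae.add_requested_context(SecondaryCaptureImageStorage)

ds = dcmread("results/aspects_summary.dcm")  # hypothetical secondary-capture result
assoc = ae.associate("pacs.example.org", 104, ae_title="PACS")
if assoc.is_established:
    status = assoc.send_c_store(ds)
    if status:
        print(f"C-STORE completed with status 0x{status.Status:04X}")
    assoc.release()
```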

AI/ML Overview

Here's a summary of the acceptance criteria and the study that proves the Brainomix 360 e-ASPECTS device meets those criteria, based on the provided text:

1. Table of Acceptance Criteria and Reported Device Performance

The acceptance criteria are not explicitly listed in a separate table in the provided text. However, the "Stand-alone Performance Testing" and "Clinical Studies" sections describe the performance metrics that were evaluated. The acceptance criteria in the table below are inferred from the reported performance, which was deemed sufficient for substantial equivalence.

| Acceptance Criteria (inferred from reported performance) | Reported Device Performance (Brainomix 360 e-ASPECTS) |
|---|---|
| **Stand-alone performance** | |
| Overall AUC for ASPECTS scoring | 83% (81-85, 95% CI) |
| Sensitivity for ASPECTS scoring | 68% (57-72) |
| Specificity for ASPECTS scoring | 97% (86-98) |
| Generalizable performance across demographics | Consistent performance in subgroups dichotomized by median age and defined by sex. Performance slightly lower (but not statistically significant) in the non-proximal vessel occlusion subgroup (AUC 78% vs 84%). |
| Consistent performance for different ASPECTS regions | Consistent performance between grouped cortical and grouped basal ganglia ASPECTS regions. Performance was lower in the M4, M6, and internal capsule regions (not statistically significant). |
| Correlation between e-ASPECTS heatmaps and hypodensities | r >= 0.95 for volumes of e-ASPECTS heatmaps and synthetic hypodensities in digital phantom data |
| **MRMC clinical study (reader performance improvement)** | |
| Statistically significant improvement in AUC | Statistically significant improvement of 0.02, from 0.81 to 0.83 (p=0.028), when scoring with Brainomix 360 e-ASPECTS assistance |
| Increase in sensitivity (positive percentage agreement) | Increased from 66% to 70% with e-ASPECTS assistance |
| Improvement in specificity (negative percentage agreement) | Small improvement to 96% with e-ASPECTS assistance |
| Improvement in overall percentage agreement (accuracy) | Improved from 93% to 94% with e-ASPECTS assistance |
| Consistency across reader groups | Subgroup analysis based on clinical training (radiologist vs. neurologist) demonstrates a consistent impact. Greater AUC increases for "lower performers" and smaller changes for "high performers," leading to a narrower range in AUC between users and reduced variation in performance. Impact consistent in deep and cortical ASPECTS regions. |
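
As a point of reference, the sketch below shows how dichotomized metrics of this kind (AUC, sensitivity/PPA, specificity/NPA, overall agreement, and a Pearson correlation of volumes) are commonly computed with scikit-learn and SciPy; the arrays are placeholders, not the study data.

```python
# How metrics of this kind are commonly computed; the arrays below are
# placeholders, not the study data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                   # ground-truth labels
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.3, 0.8, 0.6])  # algorithm outputs
y_pred = (y_score >= 0.5).astype(int)                          # dichotomized at an operating point

auc = roc_auc_score(y_true, y_score)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                # positive percentage agreement
specificity = tn / (tn + fp)                # negative percentage agreement
accuracy = (tp + tn) / (tp + tn + fp + fn)  # overall percentage agreement

# Heatmap volume vs. synthetic hypodensity volume (phantom-style correlation check)
heatmap_vol = np.array([10.2, 25.1, 40.3, 55.0])
synthetic_vol = np.array([11.0, 24.8, 41.1, 54.2])
r, _ = pearsonr(heatmap_vol, synthetic_vol)

print(f"AUC={auc:.2f} Se={sensitivity:.2f} Sp={specificity:.2f} Acc={accuracy:.2f} r={r:.3f}")
```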

2. Sample Size Used for the Test Set and Data Provenance

  • Stand-alone Performance Test Set:
    • Sample Size: 256 non-contrast CT scans.
    • Data Provenance: Retrospective data from 8 different USA institutions. Patients were admitted between March 2014 and March 2020.
  • MRMC Clinical Study Test Set:
    • Sample Size: 54 clinically representative NCCT retrospective scans.
    • Data Provenance: Retrospective data. The exact origin (e.g., country or institutions) of this specific set is not detailed in the summary.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

  • Stand-alone Performance Test Set: The text states "Ground truth ASPECTS score was 6 in 213 patients" but does not explicitly state the number of experts or their qualifications used to establish this ground truth. However, the subsequent MRMC study provides this detail about its ground truth.
  • MRMC Clinical Study Test Set:
    • Number of Experts: Three expert neuroradiologists.
    • Qualifications: "expert neuroradiologists with access to clinical data and follow up imaging." (Further details on years of experience are not provided.)

4. Adjudication Method for the Test Set

  • Stand-alone Performance Test Set: The adjudication method for the ground truth is not explicitly stated.
  • MRMC Clinical Study Test Set: The ground truth was established by "three expert neuroradiologists with access to clinical data and follow up imaging." This suggests a consensus-based approach, but the specific adjudication rules (e.g., majority vote, discussion to agreement) are not detailed (e.g., 2+1, 3+1 rule).

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and the effect size of how much human readers improve with AI vs without AI assistance

  • Yes, a MRMC cross-over study was conducted.
  • Effect Size:
    • The primary endpoint showed a statistically significant improvement of 0.02 in AUC (from 0.81 to 0.83, p=0.028) when readers were assisted by Brainomix 360 e-ASPECTS compared to unassisted reading.
    • This was driven by an increase in sensitivity (positive percentage agreement) from 66% to 70% and a small improvement in specificity (negative percentage agreement) to 96%.
    • Overall accuracy improved from 93% to 94%.
    • The study also noted that greater magnitude of AUC increases were observed in "lower performers," and the range in AUC between users was narrower with e-ASPECTS, indicating a reduction in variability.

6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

  • Yes, "Stand-alone Performance Testing" was conducted to comply with special controls for this device type. This tested the algorithm's performance directly against ground truth, independent of human readers.

7. The Type of Ground Truth Used

  • Stand-alone Performance Test Set: The ground truth was based on "available follow up imaging at 24 hours." This indicates outcomes data or established pathology from follow-up scans (e.g., evolution of infarct on follow-up imaging serving as ground truth for early ischemic changes).
  • MRMC Clinical Study Test Set: The ground truth was established by "three expert neuroradiologists with access to clinical data and follow up imaging." This indicates a combination of expert consensus and outcomes/pathology data from follow-up imaging.

8. The Sample Size for the Training Set

  • The document does not explicitly state the sample size for the training set. It mentions that Brainomix 360 e-ASPECTS uses "trained machine learning AI algorithms" and a "random forest machine learning technique," but details about the training data are not provided in this summary.

9. How the Ground Truth for the Training Set Was Established

  • The document does not explicitly state how the ground truth for the training set was established. It only describes the ground truth for the testing/validation sets.

§ 892.2060 Radiological computer-assisted diagnostic software for lesions suspicious of cancer.

(a) Identification. A radiological computer-assisted diagnostic software for lesions suspicious of cancer is an image processing prescription device intended to aid in the characterization of lesions as suspicious for cancer identified on acquired medical images such as magnetic resonance, mammography, radiography, or computed tomography. The device characterizes lesions based on features or information extracted from the images and provides information about the lesion(s) to the user. Diagnostic and patient management decisions are made by the clinical user.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Design verification and validation must include:

(i) A detailed description of the image analysis algorithms including, but not limited to, a detailed description of the algorithm inputs and outputs, each major component or block, and algorithm limitations.

(ii) A detailed description of pre-specified performance testing protocols and dataset(s) used to assess whether the device will improve reader performance as intended.

(iii) Results from performance testing protocols that demonstrate that the device improves reader performance in the intended use population when used in accordance with the instructions for use. The performance assessment must be based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, predictive value, and diagnostic likelihood ratio). The test dataset must contain sufficient numbers of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, concomitant diseases, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals of the device for these individual subsets can be characterized for the intended use population and imaging equipment.

(iv) Standalone performance testing protocols and results of the device.

(v) Appropriate software documentation (e.g., device hazard analysis; software requirements specification document; software design specification document; traceability analysis; and description of verification and validation activities including system level test protocol, pass/fail criteria, results, and cybersecurity).

(2) Labeling must include:

(i) A detailed description of the patient population for which the device is indicated for use.

(ii) A detailed description of the intended reading protocol.

(iii) A detailed description of the intended user and recommended user training.

(iv) A detailed description of the device inputs and outputs.

(v) A detailed description of compatible imaging hardware and imaging protocols.

(vi) Warnings, precautions, and limitations, including situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality or for certain subpopulations), as applicable.

(vii) Detailed instructions for use.

(viii) A detailed summary of the performance testing, including: Test methods, dataset characteristics, results, and a summary of sub-analyses on case distributions stratified by relevant confounders (e.g., lesion and organ characteristics, disease stages, and imaging equipment).