Brainomix 360 e-ASPECTS is a computer-aided diagnosis (CADx) software device used to assist the clinician in the assessment and characterization of brain tissue abnormalities using CT image data.
The software automatically registers images and uses an atlas to segment and analyze the ASPECTS regions. Brainomix 360 e-ASPECTS extracts image data from individual voxels to provide analysis and computed analytics, and relates the analysis to the atlas-defined ASPECTS regions. The imaging features are then synthesized by an artificial intelligence algorithm into a single Alberta Stroke Program Early CT Score (ASPECTS).
Brainomix 360 e-ASPECTS is indicated for evaluation of patients presenting for diagnostic imaging workup for evaluation of extent of disease. Extent of disease refers to the number of ASPECTS regions affected which is reflected in the total score. Brainomix 360 e-ASPECTS provides information that may be useful in the characterization of ischemic brain tissue injury during image interpretation (within 24 hours from time last known well).
Brainomix 360 e-ASPECTS provides a comparative analysis to the standard-of-care radiologist ASPECTS assessment by providing highlighted ASPECTS regions and an automated, editable ASPECTS score for clinician review. Brainomix 360 e-ASPECTS additionally provides a visualization of the voxels contributing to and excluded from the automated ASPECTS score, and a calculation of the voxel volume contributing to the ASPECTS score.
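The relationship between affected regions and the total score follows the published ASPECTS convention: the score starts at 10 and loses one point per affected region. A minimal sketch of that arithmetic is below; the region names follow the standard ASPECTS template, and how the device internally encodes them is an assumption here, not taken from this document.

```python
# Illustrative only: how an ASPECTS total score relates to affected regions.
# Region names follow the standard ASPECTS template (caudate, lentiform,
# internal capsule, insula, and cortical regions M1-M6); the device's
# internal naming is not specified in this document.

ASPECTS_REGIONS = {"C", "L", "IC", "I", "M1", "M2", "M3", "M4", "M5", "M6"}
# 10 regions are scored per cerebral hemisphere (20 regions total).

def aspects_score(affected_regions: set[str]) -> int:
    """ASPECTS starts at 10 and loses one point per affected region."""
    unknown = affected_regions - ASPECTS_REGIONS
    if unknown:
        raise ValueError(f"unknown ASPECTS regions: {sorted(unknown)}")
    return 10 - len(affected_regions)

print(aspects_score(set()))              # normal scan -> 10
print(aspects_score({"M1", "M2", "I"}))  # three affected regions -> 7
```

A score of 10 therefore indicates a normal scan, and lower scores reflect a greater extent of disease, matching the "extent of disease" language in the indications above.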
Limitations:
- Brainomix 360 e-ASPECTS is not intended for primary interpretation of CT images. It is used to assist physician evaluation.
- The Brainomix 360 e-ASPECTS score should only be used for ischemic stroke patients following the standard of care.
- Brainomix 360 e-ASPECTS has only been validated and is intended to be used in patient populations aged over 21 years.
- Brainomix 360 e-ASPECTS is not intended for mobile diagnostic use. Images viewed on a mobile platform are compressed preview images and not for diagnostic interpretation.
- Brainomix 360 e-ASPECTS has been validated and is intended to be used on Siemens Somatom Definition scanners.
Contraindications / Exclusions / Cautions:
· Patient motion: Excessive patient motion leading to artifacts that make the scan technically inadequate.
· Hemorrhagic Transformation, Hematoma.
Brainomix 360 e-ASPECTS (also referred to as e-ASPECTS in this submission) is a medical image visualization and processing software package compliant with the DICOM standard and running on an off-the-shelf physical or virtual server.
Brainomix 360 e-ASPECTS allows for the visualization, analysis and post-processing of DICOM compliant Non-contrast CT (NCCT) images which, when interpreted by a trained physician or medical technician, may yield information useful in clinical decision making.
Brainomix 360 e-ASPECTS is a stand-alone software device which uses machine learning algorithms to automatically process NCCT brain image data to provide an output ASPECTS score based on the Alberta Stroke Program Early CT Score (ASPECTS) guidelines.
The post-processing image results and ASPECTS score are identified based on regional imaging features and overlaid onto the brain scan images. e-ASPECTS provides an automatic ASPECTS score for the physician based on the input CT data, including which ASPECTS regions are identified from regional imaging features derived from the NCCT brain image data. The results are generated according to the Alberta Stroke Program Early CT Score (ASPECTS) guidelines and provided to the clinician for review and verification. At the discretion of the clinician, the scores may be adjusted based on the clinician's judgment.
Brainomix 360 e-ASPECTS can connect with other DICOM-compliant devices, for example to transfer NCCT scans from a Picture Archiving and Communication System (PACS) to Brainomix 360 e-ASPECTS software for processing.
Results and images can be sent to a PACS via DICOM transfer and can be viewed on a PACS workstation or via a web user interface on any machine contained and accessed within a hospital network and firewall and with a connection to the Brainomix 360 e-ASPECTS software (e.g. a LAN connection).
Brainomix 360 e-ASPECTS notification capabilities enable clinicians to preview images through a mobile application or via e-mail.
Brainomix 360 e-ASPECTS email notification capabilities enable clinicians to preview images via e-mail notification with result image attachments. Images that are previewed via e-mail are compressed, are for informational purposes only, and not intended for diagnostic use beyond notification.
Brainomix 360 e-ASPECTS is not intended for mobile diagnostic use. Notified clinicians are responsible for viewing non-compressed images on a diagnostic viewer and engaging in appropriate patient evaluation and relevant discussion with a treating physician before making care-related decisions or requests.
Brainomix 360 e-ASPECTS provides an automated workflow which will automatically process image data received by the system in accordance with pre-configured user DICOM routing preferences.
Once received, image processing is automatically applied. Once any image processing has been completed, notifications are sent to pre-configured users to inform that the image processing results are ready. Users can then access and review the results and images via the web user interface case viewer or PACS viewer.
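To make the automated-workflow description above concrete, here is a hypothetical sketch of matching an incoming DICOM study against pre-configured routing preferences and deciding whom to notify. The field names (`modality`, `series_keyword`, `notify`) are illustrative assumptions, not the actual Brainomix 360 configuration schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of matching incoming DICOM studies against
# pre-configured routing preferences; field names are illustrative,
# not the actual Brainomix 360 configuration schema.

@dataclass
class RoutingRule:
    modality: str                       # e.g. "CT"
    series_keyword: str                 # substring expected in SeriesDescription
    notify: list = field(default_factory=list)  # users notified when results are ready

def match_rule(rules, modality, series_description):
    """Return the first rule matching the study metadata, or None."""
    for rule in rules:
        if (rule.modality == modality
                and rule.series_keyword.lower() in series_description.lower()):
            return rule
    return None

rules = [RoutingRule("CT", "head", ["stroke-team"])]
hit = match_rule(rules, "CT", "NCCT Head 5mm")
print(hit.notify if hit else None)  # -> ['stroke-team']
```

In this sketch, a matched rule triggers processing and queues notifications to the configured users, mirroring the "pre-configured user DICOM routing preferences" described above.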
The core of the e-ASPECTS algorithm (excluding image loading and result output formatting) can be summarised in the following three key steps of the processing pipeline:
- Pre-processing: brain extraction from the three-dimensional (3D) non-contrast-enhanced CT head dataset and its reorientation/normalization by 3D spatial registration to a standard template space.
- Delineation of the 20 pre-defined ASPECTS regions of interest (10 for each cerebral hemisphere) on the normalized 3D image.
- Image feature extraction and heatmap generation: computation of numerical values characterizing brain tissue, application of a trained predictive model to those features, and generation of a 3D heatmap from the model's output highlighting regions contributing towards the ASPECTS score.
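The three steps above can be sketched schematically. The sketch below uses random data and a trivial placeholder "model"; the real e-ASPECTS registration method, features, and predictive model are proprietary and not described in this document, so every numeric choice here is an assumption for illustration only.

```python
import numpy as np

# Schematic sketch of the three pipeline steps described above, using
# random data and a trivial placeholder "model"; the real e-ASPECTS
# algorithms (registration, features, predictive model) are not public.

rng = np.random.default_rng(0)

# Step 1: pre-processing -- assume the brain-extracted volume has already
# been registered to template space (registration itself is out of scope).
volume = rng.normal(30.0, 5.0, size=(32, 32, 32))  # Hounsfield-like values

# Step 2: atlas delineating 20 regions (10 per hemisphere); a random
# label map stands in for the registered ASPECTS atlas.
atlas = rng.integers(1, 21, size=volume.shape)

# Step 3: per-region feature extraction + predictive model -> heatmap.
def region_features(vol, labels, region):
    vox = vol[labels == region]
    return np.array([vox.mean(), vox.std()])

def toy_model(features):
    # placeholder: flags a region when its mean attenuation is low
    return float(features[0] < 29.5)

heatmap = np.zeros_like(volume)
affected = []
for region in range(1, 21):
    score = toy_model(region_features(volume, atlas, region))
    heatmap[atlas == region] = score
    if score:
        affected.append(region)

print(len(affected), heatmap.shape)
```

The heatmap produced in step 3 corresponds to the visualization of contributing voxels described earlier, and the list of flagged regions is what the scoring logic would convert into the final ASPECTS value.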
The Brainomix 360 e-ASPECTS module is made available to the user through the Brainomix 360 platform, a central control unit which coordinates the execution of image processing modules supporting various analysis methods used in clinical practice today.
Below is a breakdown of the acceptance criteria and the study demonstrating the device's performance, based on the provided text:
Brainomix 360 e-ASPECTS Device Performance Study
The Brainomix 360 e-ASPECTS device underwent performance testing to demonstrate its accuracy and effectiveness. This included both standalone algorithm performance and a multi-reader multi-case (MRMC) study to assess the impact of AI assistance on human readers.
1. Acceptance Criteria and Reported Device Performance
Digital Phantom Validation (for "volume contributing to e-ASPECTS")
Metric Name | Acceptance Criteria | Reported Performance | Pass/Fail |
---|---|---|---|
Absolute Bias (upper 95% CI) | 0.86 | 0.993 | Pass |
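The phantom validation compares device-estimated volumes against known synthetic ground truth. A minimal sketch of computing the upper bound of a 95% confidence interval on the mean absolute bias is below; the four example volumes and the normal-approximation CI are illustrative assumptions, not the actual protocol or its threshold.

```python
import math
import statistics

# Sketch of a digital-phantom bias check: compare estimated volumes
# against known synthetic ground-truth volumes and compute the upper
# bound of a 95% CI on the mean absolute bias. The example data and
# the normal-approximation CI are illustrative, not the actual protocol.

def upper_ci_of_abs_bias(estimated, truth, z=1.96):
    errors = [abs(e - t) for e, t in zip(estimated, truth)]
    mean = statistics.fmean(errors)
    sem = statistics.stdev(errors) / math.sqrt(len(errors))
    return mean + z * sem

est = [10.2, 19.7, 30.5, 40.1]  # hypothetical device estimates (mL)
gt  = [10.0, 20.0, 30.0, 40.0]  # known phantom volumes (mL)
bound = upper_ci_of_abs_bias(est, gt)
print(round(bound, 3))
```

The acceptance test then simply checks whether this upper bound falls below the pre-specified threshold.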
Standalone Performance Testing (for ASPECTS score accuracy)
Metric Name | Acceptance Criteria (Implied by positive results) | Reported Performance (Model only) | Outcome |
---|---|---|---|
AUC | High diagnostic accuracy | 83% (95% CI: 80-86%) | Good |
Sensitivity | Good detection of affected regions | 69% (56-75%) | Good |
Specificity | Good identification of unaffected regions | 97% (80-97%) | Good |
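The three metrics in the table above can be computed from binary ground-truth labels and model scores. The sketch below uses made-up data and the standard confusion-matrix and rank-based (Mann-Whitney) definitions; it is not the study's analysis code.

```python
# Sketch of the reported metrics, computed from binary ground-truth
# labels and model scores; the data here is made up for illustration.

def sensitivity_specificity(truth, predicted):
    tp = sum(1 for t, p in zip(truth, predicted) if t and p)
    tn = sum(1 for t, p in zip(truth, predicted) if not t and not p)
    fn = sum(1 for t, p in zip(truth, predicted) if t and not p)
    fp = sum(1 for t, p in zip(truth, predicted) if not t and p)
    return tp / (tp + fn), tn / (tn + fp)

def auc(truth, scores):
    """Rank-based AUC: probability a positive case outscores a negative."""
    pos = [s for t, s in zip(truth, scores) if t]
    neg = [s for t, s in zip(truth, scores) if not t]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

truth  = [1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1, 0.1]
sens, spec = sensitivity_specificity(truth, [s >= 0.5 for s in scores])
print(sens, spec, auc(truth, scores))
```

Sensitivity and specificity depend on the chosen operating threshold (0.5 here), while AUC summarizes performance across all thresholds, which is why the table reports them separately.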
Multi-Reader Multi-Case (MRMC) Study (Human + AI vs. Human only for ASPECTS score accuracy)
Metric Name | Acceptance Criteria (Implied by statistical significance) | Reported Performance (Human only) | Reported Performance (Human + AI assistance) | Effect Size (Improvement) | Statistical Significance |
---|---|---|---|---|---|
AUC | Improvement in AUC with AI assistance | 78% | 85% | 6.4% | p=.03 (statistically significant) |
Sensitivity | Improvement in Sensitivity with AI assistance | 61% | 72% | 11% | Not explicitly stated as statistically significant, but driving AUC improvement |
Specificity | Improvement in Specificity with AI assistance | 96% | 98% | 2% | Not explicitly stated as statistically significant, but contributing to AUC improvement |
Cohen's Kappa | Improvement with AI assistance | Not explicitly stated | Improved significantly | - | Significantly improved |
Weighted Kappa | Improvement with AI assistance | Not explicitly stated | Improved significantly | - | Significantly improved |
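Cohen's kappa, reported in the MRMC rows above, corrects raw inter-rater agreement for agreement expected by chance. A minimal sketch with invented reader scores follows; it is the textbook unweighted statistic, not the study's analysis code.

```python
from collections import Counter

# Sketch of Cohen's kappa, the chance-corrected agreement statistic
# reported in the MRMC study; the ratings below are invented.

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    # chance agreement from each rater's marginal category frequencies
    expected = sum(counts_a[c] * counts_b[c] for c in categories) / n**2
    return (observed - expected) / (1 - expected)

a = [10, 9, 7, 10, 6, 8]  # ASPECTS scores from reader A
b = [10, 8, 7, 10, 6, 8]  # ASPECTS scores from reader B
print(round(cohens_kappa(a, b), 3))
```

The weighted variant additionally credits near-misses (e.g., 9 vs. 8) more than large disagreements, which is why both statistics are reported for an ordinal scale like ASPECTS.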
2. Sample Sizes and Data Provenance
- Digital Phantom Validation Test Set: n=110 synthetic datasets
- Standalone Performance Test Set: n=137 non-contrast CT scans
- Data Provenance: From 3 different USA institutions; images were acquired on Siemens, GE, Philips, and Toshiba scanners.
- Retrospective/Prospective: The data appears to be retrospective, based on the described patient admission dates (between March 2012 and August 2023) and clinical context.
- MRMC Study Test Set: n=140 NCCT scans
- Data Provenance: Cases collected from various clinical sites (specific countries not explicitly stated, but the mention of US neuroradiologists for ground truth suggests US data). Scanners included Siemens, GE, Philips, and Toshiba.
- Retrospective/Prospective: The study used "retrospective data" (explicitly stated on page 12).
- Training Set Sample Size: The document does not specify the sample size for the training set. It mentions the algorithm is based on "machine learning" and a "trained predictive model" but provides no details on the training data.
3. Number of Experts and Qualifications for Ground Truth Establishment
- Standalone Performance Test Set: Three board-certified US neuroradiologists. No information on years of experience is provided.
- MRMC Study Test Set: Three board-certified US neuroradiologists for establishing the ground truth that human readers were compared against. No information on years of experience is provided.
4. Adjudication Method for the Test Set(s) Ground Truth
- Standalone Performance Test Set: "Consensus of three board-certified US neuroradiologists." The ground truth was established by agreement among the three experts; the specific method (e.g., 2-out-of-3 majority, or discussion to full agreement) is not detailed.
- MRMC Study Test Set: "Consensus of three board-certified US neuroradiologists." Similar to the standalone study, ground truth was established by consensus.
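The document states only "consensus" without detailing the rule. The sketch below illustrates one common adjudication scheme, a 2-of-3 majority vote per region with escalation to discussion when all three readers disagree; this is purely an illustration, not the method actually used in the study.

```python
from collections import Counter

# One common adjudication scheme (2-of-3 majority per region), shown
# purely as an illustration -- the study's actual consensus method
# is not described in the document.

def majority_label(labels):
    """Return the label chosen by at least 2 of 3 readers, or None to
    signal that discussion would be needed to reach consensus."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None

print(majority_label(["affected", "affected", "normal"]))    # -> affected
print(majority_label(["affected", "normal", "equivocal"]))   # -> None
```

Whether the study used such a majority rule or required full agreement through discussion cannot be determined from the text.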
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was it done?: Yes, an MRMC study was conducted.
- Effect Size: The study showed a 6.4% improvement in AUC for readers with e-ASPECTS support (85%) compared to without e-ASPECTS support (78%). This improvement was statistically significant (p=.03). There was also an improvement in sensitivity (from 61% to 72%) and a small improvement in specificity (from 96% to 98%). Cohen's Kappa and weighted Kappa also improved significantly.
- Readers: 7 clinical readers (1 "expert" neuroradiologist and 6 "non-expert" radiologists or neurologists).
6. Standalone Performance (Algorithm Only)
- Was it done?: Yes, a standalone performance testing was conducted.
- Performance Metrics: The algorithm achieved an AUC of 83% (95% CI: 80-86%), with a sensitivity of 69% (56-75%) and a specificity of 97% (80-97%) at the case level compared against expert consensus; the area under the curve (AUC) specifically refers to overall region-level performance.
7. Type of Ground Truth Used
- Digital Phantom Validation: Synthetic volumes/known phantom volumes.
- Standalone Performance Testing: Expert consensus (of three board-certified US neuroradiologists).
- MRMC Study: Expert consensus (of three board-certified US neuroradiologists).
8. Sample Size for the Training Set
The document does not provide a specific sample size for the training set. It only states that the device uses "machine learning algorithms" and a "trained predictive model."
9. How Ground Truth for Training Set Was Established
The document does not describe how the ground truth for the training set was established. It only refers to a "trained predictive model."
§ 892.2060 Radiological computer-assisted diagnostic software for lesions suspicious of cancer.
(a) Identification. A radiological computer-assisted diagnostic software for lesions suspicious of cancer is an image processing prescription device intended to aid in the characterization of lesions as suspicious for cancer identified on acquired medical images such as magnetic resonance, mammography, radiography, or computed tomography. The device characterizes lesions based on features or information extracted from the images and provides information about the lesion(s) to the user. Diagnostic and patient management decisions are made by the clinical user.
(b) Classification. Class II (special controls). The special controls for this device are:
(1) Design verification and validation must include:
(i) A detailed description of the image analysis algorithms including, but not limited to, a detailed description of the algorithm inputs and outputs, each major component or block, and algorithm limitations.
(ii) A detailed description of pre-specified performance testing protocols and dataset(s) used to assess whether the device will improve reader performance as intended.
(iii) Results from performance testing protocols that demonstrate that the device improves reader performance in the intended use population when used in accordance with the instructions for use. The performance assessment must be based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, predictive value, and diagnostic likelihood ratio). The test dataset must contain sufficient numbers of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, concomitant diseases, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals of the device for these individual subsets can be characterized for the intended use population and imaging equipment.
(iv) Standalone performance testing protocols and results of the device.
(v) Appropriate software documentation (e.g., device hazard analysis; software requirements specification document; software design specification document; traceability analysis; and description of verification and validation activities including system level test protocol, pass/fail criteria, results, and cybersecurity).
(2) Labeling must include:
(i) A detailed description of the patient population for which the device is indicated for use.
(ii) A detailed description of the intended reading protocol.
(iii) A detailed description of the intended user and recommended user training.
(iv) A detailed description of the device inputs and outputs.
(v) A detailed description of compatible imaging hardware and imaging protocols.
(vi) Warnings, precautions, and limitations, including situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality or for certain subpopulations), as applicable.
(vii) Detailed instructions for use.
(viii) A detailed summary of the performance testing, including: Test methods, dataset characteristics, results, and a summary of sub-analyses on case distributions stratified by relevant confounders (e.g., lesion and organ characteristics, disease stages, and imaging equipment).