510(k) Data Aggregation
(119 days)
Brainomix 360 e-ASPECTS is a computer-aided diagnosis (CADx) software device used to assist the clinician in the assessment and characterization of brain tissue abnormalities using CT image data.
The software automatically registers images and uses an atlas to segment and analyze ASPECTS regions. Brainomix 360 e-ASPECTS extracts image data from individual voxels to provide computer analysis and relates that analysis to the atlas-defined ASPECTS regions. The imaging features are then synthesized by an artificial intelligence algorithm into a single Alberta Stroke Program Early CT Score (ASPECTS).
Brainomix 360 e-ASPECTS is indicated for evaluation of patients presenting for diagnostic imaging workup for evaluation of extent of disease. Extent of disease refers to the number of ASPECTS regions affected which is reflected in the total score. Brainomix 360 e-ASPECTS provides information that may be useful in the characterization of ischemic brain tissue injury during image interpretation (within 24 hours from time last known well).
Brainomix 360 e-ASPECTS provides a comparative analysis to the ASPECTS standard of care radiologist assessment by providing highlighted ASPECTS regions and an automated editable ASPECTS score for clinician review. Brainomix 360 e-ASPECTS additionally provides a visualization of the voxels contributing to and excluded from the automated ASPECTS score, and a calculation of the voxel volume contributing to ASPECTS score.
Limitations:
- Brainomix 360 e-ASPECTS is not intended for primary interpretation of CT images. It is used to assist physician evaluation.
- The Brainomix 360 e-ASPECTS score should only be used for ischemic stroke patients following the standard of care.
- Brainomix 360 e-ASPECTS has only been validated and is intended to be used in patient populations aged over 21 years.
- Brainomix 360 e-ASPECTS is not intended for mobile diagnostic use. Images viewed on a mobile platform are compressed preview images and not for diagnostic interpretation.
- Brainomix 360 e-ASPECTS has been validated and is intended to be used on Siemens Somatom Definition scanners.
Contraindications / Exclusions / Cautions:
- Patient motion: Excessive patient motion leading to artifacts that make the scan technically inadequate.
- Hemorrhagic transformation, hematoma.
Brainomix 360 e-ASPECTS (also referred to as e-ASPECTS in this submission) is a medical image visualization and processing software package compliant with the DICOM standard and running on an off-the-shelf physical or virtual server.
Brainomix 360 e-ASPECTS allows for the visualization, analysis and post-processing of DICOM compliant Non-contrast CT (NCCT) images which, when interpreted by a trained physician or medical technician, may yield information useful in clinical decision making.
Brainomix 360 e-ASPECTS is a stand-alone software device which uses machine learning algorithms to automatically process NCCT brain image data to provide an output ASPECTS score based on the Alberta Stroke Program Early CT Score (ASPECTS) guidelines.
The post-processing image results and ASPECTS score are derived from regional imaging features and overlaid onto the brain scan images. e-ASPECTS provides an automatic ASPECTS score, based on the input CT data, for the physician; the score indicates which ASPECTS regions are identified from regional imaging features derived from the NCCT brain image data. The results are generated according to the Alberta Stroke Program Early CT Score (ASPECTS) guidelines and provided to the clinician for review and verification. At the discretion of the clinician, the scores may be adjusted based on the clinician's judgment.
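For orientation, ASPECTS itself is simple arithmetic: the affected hemisphere starts at 10 points and loses one point per region showing early ischemic change. A minimal sketch of that scoring rule (region names follow the ASPECTS convention; the data layout is illustrative, not Brainomix's interface):

```python
# Illustrative ASPECTS scoring: 10 regions per hemisphere; the score is
# 10 minus the number of distinct affected regions in the affected hemisphere.
ASPECTS_REGIONS = [
    "C", "L", "IC", "I",   # caudate, lentiform, internal capsule, insula
    "M1", "M2", "M3",      # cortical MCA regions at the ganglionic level
    "M4", "M5", "M6",      # cortical MCA regions at the supraganglionic level
]

def aspects_score(affected_regions):
    """Return 10 minus the count of distinct affected ASPECTS regions."""
    affected = set(affected_regions)
    unknown = affected - set(ASPECTS_REGIONS)
    if unknown:
        raise ValueError(f"unknown regions: {sorted(unknown)}")
    return 10 - len(affected)
```

A normal scan scores 10; a score of 8 here would mean two regions (say M1 and the insula) show early ischemic change.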
Brainomix 360 e-ASPECTS can connect with other DICOM-compliant devices, for example to transfer NCCT scans from a Picture Archiving and Communication System (PACS) to Brainomix 360 e-ASPECTS software for processing.
Results and images can be sent to a PACS via DICOM transfer and viewed on a PACS workstation, or via a web user interface on any machine within the hospital network and firewall that has a connection to the Brainomix 360 e-ASPECTS software (e.g. a LAN connection).
Brainomix 360 e-ASPECTS notification capabilities enable clinicians to preview images through a mobile application or via e-mail.
Brainomix 360 e-ASPECTS email notification capabilities enable clinicians to preview images via e-mail notification with result image attachments. Images that are previewed via e-mail are compressed, are for informational purposes only, and not intended for diagnostic use beyond notification.
Brainomix 360 e-ASPECTS is not intended for mobile diagnostic use. Notified clinicians are responsible for viewing non-compressed images on a diagnostic viewer and engaging in appropriate patient evaluation and relevant discussion with a treating physician before making care-related decisions or requests.
Brainomix 360 e-ASPECTS provides an automated workflow which will automatically process image data received by the system in accordance with pre-configured user DICOM routing preferences.
Once received, image processing is automatically applied. Once any image processing has been completed, notifications are sent to pre-configured users to inform that the image processing results are ready. Users can then access and review the results and images via the web user interface case viewer or PACS viewer.
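The pre-configured routing preferences described above can be pictured as rules matched against incoming DICOM header fields. This is a hypothetical sketch of such a rule table (the field names follow standard DICOM attributes, but the rule format and notification list are invented for illustration and are not Brainomix's configuration schema):

```python
# Hypothetical routing: each rule matches DICOM header fields and names the
# processing module plus the users to notify when results are ready.
ROUTING_RULES = [
    {"Modality": "CT", "contains": "HEAD", "module": "e-ASPECTS",
     "notify": ["stroke-team@example.org"]},
]

def route(headers, rules=ROUTING_RULES):
    """Return (module, notify-list) for the first matching rule, else None."""
    for rule in rules:
        if headers.get("Modality") != rule["Modality"]:
            continue
        if rule["contains"] in headers.get("StudyDescription", "").upper():
            return rule["module"], rule["notify"]
    return None
```

An incoming head NCCT would match the first rule and be dispatched to the e-ASPECTS module, with the stroke team notified once processing completes.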
The core of the e-ASPECTS algorithm (excluding image loading and result output formatting) can be summarized in the following three key steps of the processing pipeline:
- Pre-processing: brain extraction from the three-dimensional (3D) non-contrast CT head dataset and its reorientation/normalization by 3D spatial registration to a standard template space.
- Delineation of the 20 (10 for each cerebral hemisphere) pre-defined ASPECTS regions of interest on the normalized 3D image.
- Image feature extraction and heatmap generation: computing numerical values characterizing brain tissue, applying a trained predictive model to those features, and generating a 3D heatmap from the model's output to highlight regions contributing towards the ASPECTS score.
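The three pipeline steps can be sketched as a code skeleton. Everything below is a placeholder illustration of the structure only, not Brainomix's implementation: the brain extraction/registration, the atlas, and the predictive model are all stubbed out.

```python
import numpy as np

def preprocess(volume):
    """Step 1 (placeholder): brain extraction and registration to template
    space. Here we merely clip to a plausible brain-tissue HU window."""
    return np.clip(volume, 0, 100)

def delineate_regions(shape):
    """Step 2 (placeholder): label each voxel with one of the 20 ASPECTS
    regions (10 per hemisphere). A real system maps an atlas; we tile labels."""
    labels = np.arange(np.prod(shape)) % 20
    return labels.reshape(shape)

def score_regions(volume, labels, model):
    """Step 3: compute per-region features, apply a trained model, and build
    a heatmap of voxels contributing to the score."""
    heat = np.zeros(volume.shape)
    affected = []
    for region in range(20):
        mask = labels == region
        features = np.array([volume[mask].mean(), volume[mask].std()])
        p = model(features)  # probability the region is affected
        heat[mask] = p
        if p > 0.5:
            affected.append(region)
    return affected, heat
```

The returned heatmap corresponds to the voxel-level visualization the device overlays on the scan, and the affected-region list feeds the editable ASPECTS score.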
The Brainomix 360 e-ASPECTS module is made available to the user through the Brainomix 360 platform, a central control unit which coordinates the execution of image processing modules supporting various analysis methods used in clinical practice today.
Here's a breakdown of the acceptance criteria and the study proving the device's performance, based on the provided text:
Brainomix 360 e-ASPECTS Device Performance Study
The Brainomix 360 e-ASPECTS device underwent performance testing to demonstrate its accuracy and effectiveness. This included both standalone algorithm performance and a multi-reader multi-case (MRMC) study to assess the impact of AI assistance on human readers.
1. Acceptance Criteria and Reported Device Performance
Digital Phantom Validation (for "volume contributing to e-ASPECTS")
Metric Name | Acceptance Criteria | Reported Performance | Pass/Fail |
---|---|---|---|
Absolute Bias (upper 95% CI) | 0.86 | 0.993 | Pass |
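Assuming "absolute bias" here means the mean absolute volume error across the phantom datasets, with a normal-approximation upper 95% confidence bound (the submission does not spell out the definition, so this is our assumption), the statistic could be computed as:

```python
import math

def abs_bias_upper_ci(true_vols, measured_vols, z=1.96):
    """Mean absolute error between measured and true volumes, plus the
    upper bound of its normal-approximation 95% confidence interval.
    Interpretation of 'absolute bias' is assumed, not taken from the filing."""
    errs = [abs(m - t) for t, m in zip(true_vols, measured_vols)]
    n = len(errs)
    mean = sum(errs) / n
    var = sum((e - mean) ** 2 for e in errs) / (n - 1)
    upper = mean + z * math.sqrt(var / n)
    return mean, upper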
Standalone Performance Testing (for ASPECTS score accuracy)
Metric Name | Acceptance Criteria (Implied by positive results) | Reported Performance (Model only) | Outcome |
---|---|---|---|
AUC | High diagnostic accuracy | 83% (95% CI: 80-86%) | Good |
Sensitivity | Good detection of affected regions | 69% (56-75%) | Good |
Specificity | Good identification of unaffected regions | 97% (80-97%) | Good |
Multi-Reader Multi-Case (MRMC) Study (Human + AI vs. Human only for ASPECTS score accuracy)
Metric Name | Acceptance Criteria (Implied by statistical significance) | Reported Performance (Human only) | Reported Performance (Human + AI assistance) | Effect Size (Improvement) | Statistical Significance |
---|---|---|---|---|---|
AUC | Improvement in AUC with AI assistance | 78% | 85% | 6.4% | p=.03 (statistically significant) |
Sensitivity | Improvement in Sensitivity with AI assistance | 61% | 72% | 11% | Not explicitly stated as statistically significant, but driving AUC improvement |
Specificity | Improvement in Specificity with AI assistance | 96% | 98% | 2% | Not explicitly stated as statistically significant, but contributing to AUC improvement |
Cohen's Kappa | Improvement with AI assistance | Not explicitly stated | Improved significantly | - | Significantly improved |
Weighted Kappa | Improvement with AI assistance | Not explicitly stated | Improved significantly | - | Significantly improved |
2. Sample Sizes and Data Provenance
- Digital Phantom Validation Test Set: n=110 synthetic datasets
- Standalone Performance Test Set: n=137 non-contrast CT scans
- Data Provenance: From 3 different US institutions; scans were acquired on Siemens, GE, Philips, and Toshiba scanners.
- Retrospective/Prospective: The data appears to be retrospective, based on the description of patient admission dates (between March 2012 and August 2023) and the clinical context.
- MRMC Study Test Set: n=140 NCCT scans
- Data Provenance: Cases collected from various clinical sites (specific countries not explicitly stated, but the mention of US neuroradiologists for ground truth suggests US data). Scanners included Siemens, GE, Philips, and Toshiba.
- Retrospective/Prospective: The study used "retrospective data" (explicitly stated on page 12).
- Training Set Sample Size: The document does not specify the sample size for the training set. It mentions the algorithm is based on "machine learning" and a "trained predictive model" but provides no details on the training data.
3. Number of Experts and Qualifications for Ground Truth Establishment
- Standalone Performance Test Set: Three board-certified US neuroradiologists. No information on years of experience is provided.
- MRMC Study Test Set: Three board-certified US neuroradiologists for establishing the ground truth that human readers were compared against. No information on years of experience is provided.
4. Adjudication Method for the Test Set(s) Ground Truth
- Standalone Performance Test Set: "Consensus of three board-certified US neuroradiologists." The ground truth was established by agreement among the three experts; the specific method (e.g., 2-out-of-3, or discussion to reach full consensus) is not detailed.
- MRMC Study Test Set: "Consensus of three board-certified US neuroradiologists." Similar to the standalone study, ground truth was established by consensus.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was it done?: Yes, an MRMC study was conducted.
- Effect Size: The study showed a 6.4% improvement in AUC for readers with e-ASPECTS support (85%) compared to without e-ASPECTS support (78%). This improvement was statistically significant (p=.03). There was also an improvement in sensitivity (from 61% to 72%) and a small improvement in specificity (from 96% to 98%). Cohen's Kappa and weighted Kappa also improved significantly.
- Readers: 7 clinical readers (1 "expert" neuroradiologist and 6 "non-expert" radiologists or neurologists).
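The Cohen's and weighted kappa statistics cited in the study measure agreement between each reader's ASPECTS scores and the consensus reference. A self-contained sketch of quadratic-weighted Cohen's kappa over the 0-10 ASPECTS range (the study does not state its weighting scheme; quadratic weights are a common choice for ordinal scores and are assumed here):

```python
def weighted_kappa(rater_a, rater_b, n_classes=11):
    """Quadratic-weighted Cohen's kappa for ordinal scores 0..n_classes-1
    (ASPECTS runs 0-10, hence 11 classes). 1.0 means perfect agreement."""
    n = len(rater_a)
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for a, b in zip(rater_a, rater_b):
        obs[a][b] += 1.0 / n
    # Marginal distributions of each rater's scores.
    pa = [sum(obs[i][j] for j in range(n_classes)) for i in range(n_classes)]
    pb = [sum(obs[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = ((i - j) ** 2) / ((n_classes - 1) ** 2)  # quadratic weight
            num += w * obs[i][j]
            den += w * pa[i] * pb[j]
    return 1.0 - num / den
```

With quadratic weights, a reader who is off by one point is penalized far less than one who is off by five, which suits an ordinal scale like ASPECTS.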
6. Standalone Performance (Algorithm Only)
- Was it done?: Yes, a standalone performance testing was conducted.
- Performance Metrics: The algorithm achieved an AUC of 83% (95% CI: 80-86%), with a sensitivity of 69% (56-75%) and a specificity of 97% (80-97%) on a case-level as compared to expert consensus. Area under the curve (AUC) specifically refers to overall region-level performance.
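The reported metrics follow standard definitions: sensitivity and specificity from a 2x2 confusion table, and AUC as the probability that a positive region outranks a negative one (the Mann-Whitney formulation). A minimal sketch with illustrative labels and scores:

```python
def sensitivity_specificity(labels, preds):
    """Sensitivity and specificity from binary labels and binary predictions."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """AUC as the probability that a positive case outscores a negative case
    (Mann-Whitney U statistic; ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))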
7. Type of Ground Truth Used
- Digital Phantom Validation: Synthetic volumes/known phantom volumes.
- Standalone Performance Testing: Expert consensus (of three board-certified US neuroradiologists).
- MRMC Study: Expert consensus (of three board-certified US neuroradiologists).
8. Sample Size for the Training Set
The document does not provide a specific sample size for the training set. It only states that the device uses "machine learning algorithms" and a "trained predictive model."
9. How Ground Truth for Training Set Was Established
The document does not describe how the ground truth for the training set was established. It only refers to a "trained predictive model."
(172 days)
Rapid is an image processing software package to be used by trained professionals, including but not limited to physicians (medical analysis and decision making) and medical technicians (administrative case processing). The software runs on a standard off-the-shelf computer or a virtual platform, such as VMware, and can be used to perform image viewing, processing, and analysis of images. Data and images are acquired through DICOM compliant imaging devices. Rapid is indicated for use in Adults only.
Rapid provides both viewing and analysis capabilities for functional and dynamic imaging datasets acquired with CT, CT Perfusion (CTP), CT Angiography (CTA), C-arm CT Perfusion and MRI including a Diffusion Weighted MRI (DWI) Module and a Dynamic Analysis Module (dynamic contrast-enhanced imaging data for MRI, CT, and C-arm CT).
Rapid C-arm CT Perfusion can be used to qualitatively assess cerebral hemodynamics in the angiography suite.
The CT analysis includes NCCT maps showing areas of hypodense and hyperdense tissue.
The DWI Module is used to visualize local water diffusion properties from the analysis of diffusion - weighted MRI data.
The Dynamic Analysis Module is used for visualization and analysis of dynamic imaging data, showing properties of changes in contrast over time. This functionality includes calculation of parameters related to tissue flow (perfusion) and tissue blood volume.
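As a simplified illustration of what a time-intensity analysis yields (real perfusion parameter estimation involves deconvolution against an arterial input function, which is omitted here), summary quantities of a contrast curve can be computed as:

```python
def curve_summary(times, intensities):
    """Simple summaries of a contrast time-intensity curve: baseline-corrected
    peak enhancement, time-to-peak, and the trapezoidal area under the curve
    (proportional to blood volume under common simplifying assumptions)."""
    baseline = intensities[0]
    enhanced = [i - baseline for i in intensities]
    peak = max(enhanced)
    ttp = times[enhanced.index(peak)]
    area = sum((enhanced[k] + enhanced[k + 1]) / 2 * (times[k + 1] - times[k])
               for k in range(len(times) - 1))
    return {"peak": peak, "time_to_peak": ttp, "auc": area}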
Rapid CT Perfusion and Rapid MR Perfusion can be used by physicians to aid in the selection of acute stroke patients (with known occlusion of the intracranial internal carotid artery or proximal middle cerebral artery). Instructions for the use of contrast agents for this indication can be found in Appendix A of the User's Manual. Additional information for safe and effective drug use is available in the product-specific iodinated CT and gadolinium-based MR contrast drug labeling.
In addition to the Rapid imaging criteria, patients must meet the clinical requirements for thrombectomy, as assessed by the physician, and have none of the following contraindications or exclusions:
- Bolus quality: absent or inadequate bolus.
- Patient motion: excessive motion leading to artifacts that make the scan technically inadequate.
- Presence of hemorrhage.
- C-arm CTP is not to be used in the Rapid thrombectomy indication criteria; other modalities should be consulted.
Cautions:
- C-arm CTP provides qualitative data only; review other modalities prior to diagnosis. CBF and CBV values are not absolute, and CBF, CBV, MTT and Tmax are supported for qualitative interpretation of the perfusion maps only.
Rapid is a software package that provides for the visualization and study of changes in tissue using digital images captured by diagnostic imaging systems, including CT (Computed Tomography) and MRI (Magnetic Resonance Imaging), as an aid to physician diagnosis.
Rapid can be installed on a customer's Server or it can be accessed online as a virtual system. It provides viewing, quantification, analysis and reporting capabilities.
Rapid works with the following types of (DICOM compliant) medical image data:
- CT (Computed Tomography)
- MRI (Magnetic Resonance Imaging)
Rapid acquires (DICOM compliant) medical image data from the following sources:
- DICOM file
- DICOM CD-R
- Network using DICOM protocol
Rapid provides tools for performing the following types of analysis:
- selection of acute stroke patients for endovascular thrombectomy
- volumetry of thresholded maps
- time intensity plots for dynamic time courses
- measurement of mismatch between labeled volumes on co-registered image volumes
- large vessel density
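Volumetry of thresholded maps and mismatch measurement reduce to counting voxels above a threshold, scaling by voxel volume, and comparing co-registered lesion volumes. A sketch with illustrative inputs (specific cutoffs such as Tmax > 6 s are conventions from the stroke literature, not necessarily Rapid's parameters):

```python
import numpy as np

def thresholded_volume_ml(param_map, threshold, voxel_vol_ml):
    """Volume (mL) of voxels strictly above the threshold on a parameter map."""
    return float(np.count_nonzero(param_map > threshold)) * voxel_vol_ml

def mismatch(core_ml, penumbra_ml):
    """Mismatch volume and ratio between co-registered lesion volumes
    (e.g. diffusion core vs. perfusion lesion)."""
    ratio = penumbra_ml / core_ml if core_ml > 0 else float("inf")
    return penumbra_ml - core_ml, ratio
```

For example, a 10 mL core with a 40 mL perfusion lesion gives a 30 mL mismatch volume and a mismatch ratio of 4.0.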
Rapid is a Software as a Medical Device (SaMD) consisting of one or more Rapid Servers (dedicated or virtual). The Rapid Server is an image processing engine that connects to a hospital LAN inside the hospital firewall. It can be a dedicated Rapid Server or a VM Rapid appliance, a virtualized Rapid Server that runs on a dedicated host.
Rapid is designed to streamline medical image processing tasks that are time-consuming and fatiguing in routine patient workup. Once Rapid is installed it operates with minimal user interaction. Once the CT [NCCT, CT, CTA, C-arm CT (CBCT)] or MR (MR, MRA) data are acquired, the CT or MRI console operator selects Rapid as the target for the DICOM images, then selects which study/series data to send to Rapid. Based on the type of incoming DICOM data, Rapid identifies the dataset's scanning modality and determines the suitable processing module. The Rapid Platform is a central unit which coordinates the execution of image processing modules that support various analysis methods used in clinical practice today:
- Rapid CTP/MRP/C-arm CTP, DWI, Dynamic Analysis (Original: K121447; updated with K172477, K182130, K213165, K233512 and K233582)
- Rapid CTA (K172477)
- Rapid ASPECTS (K200760, K232156)
- Rapid ICH (K193087, K221456)
- Rapid LVO (K200941, K221248)
- Rapid NCCT Stroke (K222884)
- Rapid RV/LV (K223396)
- Rapid PETN (K220499)
- Rapid ANRTN (K230074)
- Rapid SDH (K232436)
The iSchemaView Server is a dedicated server that provides a central repository for Rapid data. All iSchemaView Server data is stored on encrypted hard disks. It also provides a user interface for accessing Rapid data. It connects to a firewalled Data Center Network and has its own firewall for additional cyber/data security. The iSchemaView Server connects to one or more Rapid Servers via WAN. Available types of connection include VPN (Virtual Private Network - RFC2401 and RFC4301 Standards) Tunnel and SSH (Secure Shell).
The provided text describes the iSchemaView Rapid device, an image processing software package. The document focuses on its 510(k) submission (K233582) and demonstrates its substantial equivalence to a previously cleared predicate device (K213165). The new submission primarily extends the device's functionality to include C-arm CT for qualitative cerebral hemodynamics assessment and qualitative analysis of perfusion parameters.
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a "table of acceptance criteria" with corresponding "reported device performance" in the format typically used for performance studies with specific metrics and thresholds (e.g., sensitivity, specificity, accuracy). Instead, it states that the device was validated to provide "accurate representation of key processing parameters" and "met all design requirements and specifications."
The key performance claims and their validation are described qualitatively:
Acceptance Criterion (Implied) | Reported Device Performance |
---|---|
Accurate representation of key processing parameters for perfusion imaging (conventional CT and C-arm CT) | "The performance validation testing demonstrated that the Rapid system provides accurate representation of key processing parameters under a range of clinically relevant parameters and perturbations associated with the intended use of the software." (Page 8) "Phantom validation results between conventional CT and C-arm CT scanners for the perfusion indication of Rapid Core are comparable with small biases in MTT (mean transit time) and Tmax (time to the maximum of the residue function) which were expected due to the temporal resolution difference in conventional and C-arm CT scanners." (Page 9) |
Meet all design requirements and specifications | "Software performance, validation and verification testing demonstrated that the Rapid system met all design requirements and specifications." (Page 8) |
2. Sample size used for the test set and the data provenance
The document states that iSchemaView conducted "extensive phantom validation testing" and "software verification and validation testing of the Rapid system" using "the use of phantoms and case data." However, it does not specify the sample size for the test set (number of phantoms or cases).
The data provenance is stated as:
- Phantoms: Used for characterizing perfusion imaging performance.
- Case Data: Used for validating the Rapid System performance.
The document does not explicitly mention the country of origin of the data or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not specify the number of experts used to establish ground truth for the test set or their specific qualifications. It mentions that the device is "to be used by trained professionals, including but not limited to physicians (medical analysis and decision making) and medical technicians (administrative case processing)" and that "Rapid C-arm CT Perfusion can be used to qualitatively assess cerebral hemodynamics in the angiography suite." While this indicates the intended users, it does not explicitly detail the experts involved in establishing ground truth for the validation studies.
4. Adjudication method for the test set
The document does not mention any adjudication method (e.g., 2+1, 3+1) used for establishing ground truth in the test set.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
The document does not describe an MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. The current submission focuses on demonstrating substantial equivalence and the performance of the device itself (including its new feature for C-arm CT) rather than its direct comparative effectiveness with human readers.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
The provided text only discusses "extensive phantom validation testing" and "software verification and validation testing." The results presented ("accurate representation of key processing parameters," "met all design requirements and specifications," and "comparable with small biases") appear to be from an algorithm-only (standalone) performance assessment, particularly for the software's ability to process and represent data from phantoms and cases, and the comparability of C-arm CT processing to conventional CT. There is no mention of human-in-the-loop performance in the context of these validation studies.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the "phantom validation testing," the ground truth would inherently be known physical and temporal parameters designed into the phantoms.
For the "case data," the document does not explicitly state the type of ground truth. Given the nature of a software processing and analysis system, it likely relies on a combination of:
- Established interpretations from other modalities or clinical diagnoses, particularly for "selecting acute stroke patients."
- Quantitative measurements derived from advanced imaging, which the software aims to replicate or analyze.
8. The sample size for the training set
The document does not specify the sample size for the training set. It details the device's functionality and validation rather than its development or machine learning training specifics.
9. How the ground truth for the training set was established
Since the document does not mention the sample size for the training set, it also does not describe how the ground truth for the training set was established. The focus is on the validation of the developed software, which includes algorithms, some of which may be AI/ML-based as indicated by "Mixed Traditional and AI/ML" under Software in Table 1 (page 10). However, the specifics of ML model training, including data and ground truth establishment, are not detailed in this summary.