510(k) Data Aggregation
(197 days)
StrokeViewer Perfusion is an image processing software package intended to provide quantitative perfusion information in brain tissue for patients with suspected ischemic stroke. It is to be used by medical imaging professionals who analyze dynamic perfusion studies, including but not limited to physicians such as neurologists and radiologists.
The software is packaged as a Docker container, allowing installation on a standard "off-the-shelf" computer or a virtual platform. The software can be used to perform image viewing, processing, and analysis of brain CT perfusion (CTP) images. Data and images are acquired through DICOM-compliant imaging devices.
StrokeViewer Perfusion is used for visualization and analysis of dynamic imaging data, showing properties of changes in contrast over time, which are visualized as colored perfusion maps including flow-related parameters and tissue blood volume quantification.
Contraindications:
- Bolus Quality: absent or inadequate bolus.
- Patient Motion: excessive motion leading to artifacts that make the scan technically inadequate.
- Presence of hemorrhage.
StrokeViewer Perfusion is an image processing application that runs on a standard "off-the-shelf" computer or a virtual platform, and can be used to perform image processing and analysis of CT perfusion images of the brain.
The software can receive, identify, and extract DICOM information embedded in the imaging data. The output includes parametric maps related to tissue blood flow (perfusion) and tissue blood volume. Results of the analysis are exported as DICOM series and DICOM reports, can be sent to a preconfigured destination, and can be reviewed on a compatible DICOM viewer. Perfusion image analysis includes calculation of the following perfusion-related parameters:
- Cerebral Blood Flow (CBF)
- Cerebral Blood Volume (CBV)
- Mean Transit Time (MTT)
- Residue function time-to-peak (Tmax)
- Arterial Input Function (AIF)
- Volume calculations of affected tissue based on Tmax and CBF abnormalities
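Parameters of this kind are conventionally derived from indicator-dilution (tracer kinetic) theory: the tissue time-density curve is modeled as the AIF convolved with a flow-scaled residue function, and deconvolving the tissue curve by the AIF recovers CBF, MTT, and Tmax. The sketch below shows a generic truncated-SVD deconvolution in Python; it is a textbook illustration under that model, not the StrokeViewer implementation, and the function name and truncation threshold are illustrative assumptions (noisy clinical data typically uses a larger threshold).

```python
import numpy as np

def perfusion_params(t, aif, tissue, dt, sv_thresh=0.05):
    """Generic sSVD deconvolution sketch (not the vendor algorithm).

    Model: tissue(t) = CBF * (AIF conv R)(t), R = residue function.
    Deconvolving tissue by the AIF yields k(t) = CBF * R(t).
    """
    n = len(t)
    # Lower-triangular discrete convolution matrix built from the AIF.
    A = dt * np.array([[aif[i - j] if i >= j else 0.0
                        for j in range(n)] for i in range(n)])
    # Truncated SVD regularizes the ill-posed inversion.
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > sv_thresh * s.max(), 1.0 / s, 0.0)
    k = Vt.T @ (s_inv * (U.T @ tissue))   # k(t) = CBF * R(t)
    cbf = k.max()                          # R peaks at 1, so max k ~= CBF
    cbv = tissue.sum() / aif.sum()         # area ratio (dt cancels)
    mtt = cbv / cbf                        # central volume theorem
    tmax = t[int(np.argmax(k))]            # delay of residue-function peak
    return cbf, cbv, mtt, tmax
```

Volume calculations of affected tissue then reduce to thresholding the resulting Tmax and CBF maps and summing voxel volumes.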
The device description and overall fundamental scientific technology of the StrokeViewer Perfusion algorithm are equivalent to those of the predicate device (K182130) and the listed reference devices within this submission.
Here's a summary of the acceptance criteria and study details for StrokeViewer Perfusion, based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The available document does not provide a specific table of quantitative acceptance criteria and corresponding reported device performance values. It broadly states that "The results of algorithm performance testing showed that the device met performance goals and acceptance criteria."
However, it does describe the type of testing performed:
| Acceptance Criterion (Implicit) | Reported Device Performance (General) |
|---|---|
| Correlation with ground truth values on simulated datasets. | "The results of algorithm performance testing showed that the device met performance goals and acceptance criteria." |
| Functional performance and meeting design requirements. | "The results of software verification, and algorithmic testing demonstrate that StrokeViewer Perfusion meets all design requirements and specifications for its intended use." |
| Software validation (Unit, End-to-end, Reproducibility). | "The tests performed demonstrate that Perfusion in StrokeViewer performs as intended. All testing included within the respective sections of this submission uphold our belief that StrokeViewer is substantially equivalent to the predicate device (K182130)." |
| Adherence to risk management and consensus standards. | "All risk controls were identified, implemented, and mitigated according to hazards identified. All testing related to risk controls support the acceptance criteria outlined for our software requirement specifications." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: The document mentions "simulated datasets (Kudo digital phantom)" for non-clinical bench performance testing. It does not specify the exact number of simulated cases or provide a specific sample size for a "test set" in the context of clinical image data.
- Data Provenance: The testing was conducted using "simulated datasets (Kudo digital phantom) generated using simulating tracer kinetic theory." This implies the data is synthetic/simulated, not derived from real patients. Therefore, information regarding "country of origin" or "retrospective/prospective" is not applicable in this context.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
Since the testing was performed on "simulated datasets (Kudo digital phantom)" where "ground truth values were calculated" based on tracer kinetic theory, the ground truth was established by the design of the simulation, not by human experts interpreting medical images. Therefore, no human experts were involved in establishing the ground truth for this specific non-clinical test set.
4. Adjudication Method for the Test Set
Not applicable, as ground truth was established by simulation design/calculation, not human interpretation requiring adjudication.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was it done? No. The document explicitly states: "Clinical data was not required for this submission of the StrokeViewer Perfusion. We believe the safety and effectiveness of the proposed device was appropriately tested with non-clinical validation."
- Effect Size of Human Readers Improvement: Not applicable, as no MRMC study was conducted.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
Yes, the primary performance evaluation described is a standalone algorithmic assessment using simulated data: "Non-clinical bench performance testing was performed to assess Perfusion algorithm performance on simulated datasets (Kudo digital phantom) generated using simulating tracer kinetic theory. Correlations between the output of the StrokeViewer Perfusion device and the ground truth values were calculated."
7. Type of Ground Truth Used
The ground truth used for the non-clinical bench testing was derived from simulated tracer kinetic theory / calculated ground truth values within the Kudo digital phantom.
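Digital perfusion phantoms of this general kind provide ground truth by construction: tissue curves are synthesized by convolving a chosen AIF with a residue function parameterized by the desired CBF and MTT, so the true parameter values are known exactly before any algorithm runs. The following is a minimal sketch of that idea under a common exponential-residue assumption; it illustrates the general technique, not the specifics of the Kudo phantom.

```python
import numpy as np

def phantom_curve(t, aif, cbf, mtt, dt):
    """Synthesize a tissue curve with known ground truth:
    c(t) = CBF * (AIF conv R)(t), with exponential residue
    R(t) = exp(-t / MTT), so CBV = CBF * MTT by construction."""
    residue = np.exp(-t / mtt)
    return cbf * np.convolve(aif, residue)[:len(t)] * dt

# Ground truth is simply the (CBF, MTT) grid the curves were generated
# from; a device's outputs can then be correlated against those values.
```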
8. Sample Size for the Training Set
The document does not explicitly state the sample size of the training set used for the StrokeViewer Perfusion algorithm. The focus of this 510(k) summary is on the validation testing rather than the development or training process.
9. How the Ground Truth for the Training Set Was Established
The document does not provide details on how the ground truth for the training set was established. Given the nature of the validation testing using simulated data, it's plausible that similar simulated or engineered data with known ground truths were used for training, but this is not stated.
(233 days)
HALO is a notification-only, cloud-based image processing software that uses artificial intelligence algorithms to analyze patient imaging data in parallel to the standard-of-care imaging interpretation. Its intended use is to identify imaging patterns suggestive of a pre-specified clinical condition and to directly notify an appropriate medical specialist.
HALO's indication is to facilitate the evaluation of the brain vasculature in patients suspected of stroke by processing and analyzing contrast-enhanced CT angiograms of the brain acquired in an acute setting. After completion of the data analysis, HALO sends a notification if a pattern suggestive of a suspected intracranial Large Vessel Occlusion (LVO) of the anterior circulation (ICA, M1 or M2) has been identified in an image.
The intended users of HALO are defined as appropriate medical specialists who are involved in the diagnosis and care of stroke patients at emergency departments where stroke patients are admitted. They include physicians such as neurologists and/or other emergency department physicians.
HALO's output should not be used for primary diagnosis or clinical decisions; the final diagnosis is always decided upon by the medical specialist. HALO is indicated for CT scanners from GE Healthcare.
HALO is a notification only, cloud-based clinical support tool which identifies image features and communicates the analysis results to a specialist in parallel to the standard of care workflow.
HALO is designed to process CT angiograms of the brain and facilitate evaluation of these images using artificial intelligence to detect patterns suggestive of an intracranial large vessel occlusion (LVO) of the anterior circulation.
A copy of the original CTA images is sent to HALO cloud servers for automatic image processing. After analyzing the images, HALO sends a notification regarding a suspected finding to a specialist, recommending review of these images. The specialist can review the results remotely in a compatible DICOM web viewer.
Here's a summary of the acceptance criteria and study details for the HALO device, based on the provided FDA 510(k) summary:
HALO Device Acceptance Criteria and Study Details
1. Table of Acceptance Criteria and Reported Device Performance
| Metric | Acceptance Criteria (Implicit) | Reported Device Performance (HALO) |
|---|---|---|
| Sensitivity | Sufficiently high for LVO detection (comparable to predicate) | 91.1% (95% CI, 86.0%-94.8%) |
| Specificity | Sufficiently high for LVO detection (comparable to predicate) | 87.0% (95% CI, 81.2%-91.5%) |
| AUC | High (indicative of good discriminative power) | 0.97 |
| Notification Time | Fast enough for acute stroke setting (comparable to predicate) | Median: 4 minutes 31 seconds; range: 3:47 to 7:12 |
| Substantial Equivalence | Equivalent to predicate device ContaCT in terms of indications for use, technological characteristics, and safety and effectiveness | Concluded to be substantially equivalent |
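Binomial confidence intervals like those reported for sensitivity and specificity are commonly Wilson score intervals. The sketch below shows the arithmetic; the confusion-matrix counts are hypothetical, chosen only for illustration, since the 510(k) summary does not report them.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Hypothetical counts -- NOT taken from the HALO study:
tp, fn = 153, 15          # LVO-positive cases: detected / missed
tn, fp = 160, 24          # LVO-negative cases: correctly cleared / flagged
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
sens_lo, sens_hi = wilson_ci(tp, tp + fn)
```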
2. Sample Size and Data Provenance for Test Set
- Sample Size: 348 CTA scans were initially collected, and 364 patients were included for further analysis after exclusions. It is unclear whether the "348 CTA scans" and "364 patients" refer to the same dataset, whether some patients had multiple scans, or whether the dataset was expanded; assuming each case is a patient with at least one scan, 364 cases were used.
- Data Provenance: Retrospective evaluation in a consecutive patient cohort. Data was collected from US comprehensive stroke centers.
3. Number and Qualifications of Experts for Ground Truth
- Number of Experts: 3 neuroradiologists.
- Qualifications: "Neuroradiologists" implies specialized training and experience in interpreting neurological imaging, which is appropriate for stroke diagnosis. Specific years of experience are not mentioned.
4. Adjudication Method for Test Set
The adjudication method is not explicitly stated. The document says "Ground truth was established by an expert panel consisting of 3 neuroradiologists," which suggests a consensus-based approach, but the specific rule (e.g., majority vote, unanimous agreement, or review by a lead expert in case of disagreement) is not provided.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, an MRMC comparative effectiveness study involving human readers with and without AI assistance was not explicitly mentioned or conducted as detailed in the summary. The study focused on the standalone performance of the HALO algorithm.
6. Standalone Performance (Algorithm Only)
Yes, a standalone study was performed. The clinical study retrospectively evaluated the performance of the HALO clinical decision support algorithm for LVO detection using the collected CTA scans. The reported sensitivity, specificity, AUC, and notification time are all measures of the algorithm's standalone performance.
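The reported AUC is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen LVO-positive scan receives a higher algorithm score than a randomly chosen negative scan. A minimal sketch of that equivalence, using made-up scores rather than HALO outputs:

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U equivalence: the fraction of
    (positive, negative) pairs where the positive case outscores
    the negative one, counting ties as half a win."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical scores, not HALO outputs:
auc = auc_mann_whitney([0.9, 0.8, 0.7], [0.1, 0.2, 0.75])  # 8/9
```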
7. Type of Ground Truth Used
Expert Consensus. The ground truth for the test set was established by an expert panel consisting of 3 neuroradiologists.
8. Sample Size for Training Set
The sample size for the training set is not explicitly mentioned in the provided document. The document only covers the evaluation of the algorithm.
9. How Ground Truth for Training Set was Established
How the ground truth was established for the training set is not explicitly mentioned in the provided document. (A "database of images" is stated for the core algorithm, but not how ground truth was applied to it.)