Search Results
Found 6 results
510(k) Data Aggregation
(172 days)
Rapid is an image processing software package to be used by trained professionals, including but not limited to physicians (medical analysis and decision making) and medical technicians (administrative case processing). The software runs on a standard off-the-shelf computer or a virtual platform, such as VMware, and can be used to perform image viewing, processing, and analysis of images. Data and images are acquired through DICOM compliant imaging devices. Rapid is indicated for use in Adults only.
Rapid provides both viewing and analysis capabilities for functional and dynamic imaging datasets acquired with CT, CT Perfusion (CTP), CT Angiography (CTA), C-arm CT Perfusion and MRI including a Diffusion Weighted MRI (DWI) Module and a Dynamic Analysis Module (dynamic contrast-enhanced imaging data for MRI, CT, and C-arm CT).
Rapid C-arm CT Perfusion can be used to qualitatively assess cerebral hemodynamics in the angiography suite.
The CT analysis includes NCCT maps showing areas of hypodense and hyperdense tissue.
The DWI Module is used to visualize local water diffusion properties from the analysis of diffusion-weighted MRI data.
The Dynamic Analysis Module is used for visualization and analysis of dynamic imaging data, showing properties of changes in contrast over time. This functionality includes calculation of parameters related to tissue flow (perfusion) and tissue blood volume.
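The perfusion and blood-volume parameters named here (reported elsewhere in this summary as CBF, CBV, MTT, and Tmax) are conventionally defined by indicator-dilution tracer kinetics. The relations below are only the textbook formulation, not a statement of Rapid's proprietary implementation:

$$C_t(t) \;=\; \mathrm{CBF}\,(C_a \otimes R)(t) \;=\; \mathrm{CBF}\int_0^{t} C_a(\tau)\,R(t-\tau)\,d\tau$$

$$\mathrm{CBV} \;=\; \frac{\int_0^{\infty} C_t(\tau)\,d\tau}{\int_0^{\infty} C_a(\tau)\,d\tau}, \qquad \mathrm{MTT} \;=\; \frac{\mathrm{CBV}}{\mathrm{CBF}}, \qquad T_{\max} \;=\; \operatorname*{arg\,max}_{t}\, R(t)$$

Here $C_t$ is the tissue time-attenuation (or time-intensity) curve, $C_a$ the arterial input function, and $R$ the residue function recovered by deconvolution; proportionality constants such as tissue density and hematocrit correction are omitted.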
Rapid CT Perfusion and Rapid MR Perfusion can be used by physicians to aid in the selection of acute stroke patients (with known occlusion of the intracranial internal carotid artery or proximal middle cerebral artery). Instructions for the use of contrast agents for this indication can be found in Appendix A of the User's Manual. Additional information for safe and effective drug use is available in the product-specific iodinated CT and gadolinium-based MR contrast drug labeling.
In addition to the Rapid imaging criteria, patients must meet the clinical requirements for thrombectomy, as assessed by the physician, and have none of the following contraindications or exclusions:
· Bolus Quality: absent or inadequate bolus.
· Patient Motion: excessive motion leading to artifacts that make the scan technically inadequate.
· Presence of hemorrhage.
· C-Arm CTP is not to be used in the Rapid Thrombectomy indication criteria; other modalities should be consulted.
Cautions:
· C-Arm CTP provides qualitative data only; review other modalities prior to diagnosis. CBV and CBF values are not absolute, and CBF, CBV, MTT and Tmax are supported for qualitative interpretation of the perfusion maps only.
Rapid is a software package that provides for the visualization and study of changes in tissue using digital images captured by diagnostic imaging systems including CT (Computed Tomography) and MRI (Magnetic Resonance Imaging), as an aid to physician diagnosis.
Rapid can be installed on a customer's Server or it can be accessed online as a virtual system. It provides viewing, quantification, analysis and reporting capabilities.
Rapid works with the following types of (DICOM compliant) medical image data:
- CT (Computed Tomography)
- MRI (Magnetic Resonance Imaging)
Rapid acquires (DICOM compliant) medical image data from the following sources:
- DICOM file
- DICOM CD-R
- Network using DICOM protocol.
Rapid provides tools for performing the following types of analysis:
- selection of acute stroke patients for endovascular thrombectomy
- volumetry of thresholded maps
- time intensity plots for dynamic time courses
- measurement of mismatch between labeled volumes on co-registered image volumes
- large vessel density.
Rapid is a Software as a Medical Device (SaMD) consisting of one or more Rapid Servers (dedicated or virtual). The Rapid Server is an image processing engine that connects to a hospital LAN, or inside the Hospital Firewall. It can be a dedicated Rapid Server or a VM Rapid appliance, which is a virtualized Rapid Server that runs on a dedicated server.
Rapid is designed to streamline medical image processing tasks that are time-consuming and fatiguing in routine patient workup. Once Rapid is installed it operates with minimal user interaction. Once the CT [NCCT, CT, CTA, C-arm CT (CBCT)] or MR (MR, MRA) data are acquired, the CT or MRI console operator selects Rapid as the target for the DICOM images, and then the operator selects which study/series data are to be sent to Rapid. Based on the type of incoming DICOM data, Rapid will identify the data set scanning modality and determine the suitable processing module. The Rapid Platform is a central unit which coordinates the execution of the image processing modules that support various analysis methods used in clinical practice today (a minimal routing sketch follows the module list):
- Rapid CTP/MRP/C-arm CTP, DWI, Dynamic Analysis (Original: K121447; Updated with K172477, K182130, K213165, K233512 and K233582)
- Rapid CTA (K172477)
- Rapid ASPECTS (K200760, K232156)
- Rapid ICH (K193087, K221456)
- Rapid LVO (K200941, K221248)
- Rapid NCCT Stroke (K222884)
- Rapid RV/LV (K223396)
- Rapid PETN (K220499)
- Rapid ANRTN (K230074)
- Rapid SDH (K232436)
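The routing behavior described above (Rapid inspects incoming DICOM data, identifies the scanning modality, and hands the series to a suitable processing module) can be illustrated with a small dispatcher keyed on DICOM tags. This is a hedged sketch only: the `MODULE_RULES` table, the `route_series` helper, and the keyword matching are hypothetical and do not reflect iSchemaView's actual logic.

```python
# Illustrative sketch: dispatch an incoming DICOM series to a processing module
# based on its Modality and SeriesDescription tags. The rules and module names
# below are hypothetical, not iSchemaView's actual routing logic.
import pydicom

MODULE_RULES = [
    # (modality, keyword expected in the series description, module name)
    ("CT", "PERF", "CTP"),
    ("CT", "ANGIO", "CTA"),
    ("CT", "", "NCCT"),           # fallback for plain CT series
    ("MR", "DIFF", "DWI"),
    ("MR", "PERF", "MRP"),
]

def route_series(dicom_path: str) -> str:
    """Return the name of the processing module for one DICOM file of a series."""
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    modality = str(ds.get("Modality", "")).upper()
    description = str(ds.get("SeriesDescription", "")).upper()
    for rule_modality, keyword, module in MODULE_RULES:
        if modality == rule_modality and keyword in description:
            return module
    return "UNSUPPORTED"

# Example (path is hypothetical): route_series("incoming/IM0001.dcm")
```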
The iSchemaView Server is a dedicated server that provides a central repository for Rapid data. All iSchemaView Server data is stored on encrypted hard disks. It also provides a user interface for accessing Rapid data. It connects to a firewalled Data Center Network and has its own firewall for additional cyber/data security. The iSchemaView Server connects to one or more Rapid Servers via WAN. Available types of connection include VPN (Virtual Private Network - RFC2401 and RFC4301 Standards) Tunnel and SSH (Secure Shell).
The provided text describes the iSchemaView Rapid device, an image processing software package. The document focuses on its 510(k) submission (K233582) and demonstrates its substantial equivalence to a previously cleared predicate device (K213165). The new submission primarily extends the device's functionality to include C-arm CT for qualitative cerebral hemodynamics assessment and qualitative analysis of perfusion parameters.
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a "table of acceptance criteria" with corresponding "reported device performance" in the format typically used for performance studies with specific metrics and thresholds (e.g., sensitivity, specificity, accuracy). Instead, it states that the device was validated to provide "accurate representation of key processing parameters" and "met all design requirements and specifications."
The key performance claims and their validation are described qualitatively:
Acceptance Criterion (Implied) | Reported Device Performance |
---|---|
Accurate representation of key processing parameters for perfusion imaging (conventional CT and C-arm CT) | "The performance validation testing demonstrated that the Rapid system provides accurate representation of key processing parameters under a range of clinically relevant parameters and perturbations associated with the intended use of the software." (Page 8) "Phantom validation results between conventional CT and C-arm CT scanners for the perfusion indication of Rapid Core are comparable with small biases in MTT (mean transit time) and Tmax (time to the maximum of the residue function) which were expected due to the temporal resolution difference in conventional and C-arm CT scanners." (Page 9) |
Meet all design requirements and specifications | "Software performance, validation and verification testing demonstrated that the Rapid system met all design requirements and specifications." (Page 8) |
2. Sample size used for the test set and the data provenance
The document states that iSchemaView conducted "extensive phantom validation testing" and "software verification and validation testing of the Rapid system" using "the use of phantoms and case data." However, it does not specify the sample size for the test set (number of phantoms or cases).
The data provenance is stated as:
- Phantoms: Used for characterizing perfusion imaging performance.
- Case Data: Used for validating the Rapid System performance.
The document does not explicitly mention the country of origin of the data or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not specify the number of experts used to establish ground truth for the test set or their specific qualifications. It mentions that the device is "to be used by trained professionals, including but not limited to physicians (medical analysis and decision making) and medical technicians (administrative case processing)" and that "Rapid C-arm CT Perfusion can be used to qualitatively assess cerebral hemodynamics in the angiography suite." While this indicates the intended users, it does not explicitly detail the experts involved in establishing ground truth for the validation studies.
4. Adjudication method for the test set
The document does not mention any adjudication method (e.g., 2+1, 3+1) used for establishing ground truth in the test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
The document does not describe an MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. The current submission focuses on demonstrating substantial equivalence and the performance of the device itself (including its new feature for C-arm CT) rather than its direct comparative effectiveness with human readers.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
The provided text only discusses "extensive phantom validation testing" and "software verification and validation testing." The results presented ("accurate representation of key processing parameters," "met all design requirements and specifications," and "comparable with small biases") appear to be from an algorithm-only (standalone) performance assessment, particularly for the software's ability to process and represent data from phantoms and cases, and the comparability of C-arm CT processing to conventional CT. There is no mention of human-in-the-loop performance in the context of these validation studies.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the "phantom validation testing," the ground truth would inherently be known physical and temporal parameters designed into the phantoms.
For the "case data," the document does not explicitly state the type of ground truth. Given the nature of a software processing and analysis system, it likely relies on a combination of:
- Established interpretations from other modalities or clinical diagnoses, particularly for "selecting acute stroke patients."
- Quantitative measurements derived from advanced imaging, which the software aims to replicate or analyze.
8. The sample size for the training set
The document does not specify the sample size for the training set. It details the device's functionality and validation rather than its development or machine learning training specifics.
9. How the ground truth for the training set was established
Since the document does not mention the sample size for the training set, it also does not describe how the ground truth for the training set was established. The focus is on the validation of the developed software, which includes algorithms, some of which may be AI/ML-based as indicated by "Mixed Traditional and AI/ML" under Software in Table 1 (page 10). However, the specifics of ML model training, including data and ground truth establishment, are not detailed in this summary.
(76 days)
Rapid is an image processing software package to be used by trained professionals, including but not limited to physicians and medical technicians. The software runs on a standard off-the-shelf computer or a virtual platform, such as VMware, and can be used to perform image viewing, processing and analysis of images. Data and images are acquired through DICOM compliant imaging devices.
Rapid provides both viewing and analysis capabilities for functional and dynamic imaging datasets acquired with CT Perfusion (CTP), CT Angiography (CTA), and MRI including a Diffusion Weighted MRI (DWI) Module and a Dynamic Analysis Module (dynamic contrast-enhanced imaging data for MRI and CT).
The CT analysis includes NCCT maps showing areas of hypodense and hyperdense tissue.
The DWI Module is used to visualize local water diffusion properties from the analysis of diffusion weighted MRI data.
The Dynamic Analysis Module is used for visualization and analysis of dynamic imaging data, showing properties of changes in contrast over time. This functionality includes calculation of parameters related to tissue flow (perfusion) and tissue blood volume.
Rapid CT-Perfusion and Rapid MR-Perfusion can be used by physicians to aid in the selection of acute stroke patients (with known occlusion of the intracranial internal carotid artery or proximal middle cerebral artery). Instructions for the use of contrast agents for this indication can be found in Appendix A of the User's Manual. Additional information for safe and effective drug use is available in the product-specific iodinated CT and gadolinium-based MR contrast drug labeling.
In addition to the Rapid imaging criteria, patients must meet the clinical requirements for thrombectomy, as assessed by the physician, and have none of the following contraindications or exclusions:
- Bolus Quality: absent or inadequate bolus.
- Patient Motion: excessive motion leading to artifacts that make the scan technically inadequate.
- Presence of hemorrhage.
Rapid is a software package that provides for the visualization and study of changes in tissue using digital images captured by diagnostic imaging systems including CT (Computed Tomography) and MRI (Magnetic Resonance Imaging), as an aid to physician diagnosis. Rapid can be installed on a customer's Server or it can be accessed online as a virtual system. It provides viewing, quantification, analysis and reporting capabilities.
Rapid is a Software as a Medical Device (SaMD) consisting of one or more Rapid Servers (dedicated or virtual) in on-premises or hybrid (on-premises/cloud) configurations. The Rapid Server is an image processing engine that connects to a hospital LAN, or inside the Hospital Firewall in the on-premises configuration or in conjunction with a secure link to the cloud in the hybrid configuration. It can be a dedicated Rapid Server or a VM Rapid appliance, which is a virtualized Rapid Server that runs on a dedicated server.
Rapid is designed to streamline medical image processing tasks that are time-consuming and fatiguing in routine patient workup. Once Rapid is installed it operates with minimal user interaction. Once the CT (NCCT, CT, CTA) or MR (MR, MRA) data are acquired, the CT or MRI console operator selects Rapid as the target for the DICOM images, and then the operator selects which study/series data are to be sent to Rapid. Based on the type of incoming DICOM data, Rapid will identify the data set scanning modality and determine the suitable processing module. The Rapid platform is a central control unit which coordinates the execution of the image processing modules that support various analysis methods used in clinical practice today.
Here's an analysis of the provided text to fulfill your request, noting that the document is an FDA 510(k) clearance letter and summary, which typically focuses on demonstrating substantial equivalence to a predicate device rather than presenting a detailed de novo device performance study. Therefore, some of the requested information (like specific effect sizes from MRMC studies or detailed ground truth establishment for a training set) might not be explicitly present if the submission didn't require entirely new clinical performance data for clearance.
Key Observation from the Document:
The document (K233512) is a 510(k) summary for iSchemaView Rapid (6.0), claiming substantial equivalence to a previously cleared predicate device, Rapid (K213165). The primary change appears to be an "extension of installation in a hybrid configuration (on-premises and hybrid)." This implies that extensive new clinical performance studies for the core functionality may not have been required, as the device is deemed "as safe and effective as the previously cleared Rapid (K213165) with an extension of installation in a hybrid configuration."
Given this, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" are largely framed around demonstrating equivalence to the predicate and ensuring the new configuration doesn't introduce new safety or effectiveness concerns.
Acceptance Criteria and Device Performance (Based on the provided document)
Since this is a 510(k) submission for substantial equivalence based on a predicate, the "acceptance criteria" are implied to be that the device performs equivalently to the predicate and any new features (like hybrid configuration) do not negatively impact safety or effectiveness. The document highlights software verification and validation as the primary means of demonstrating compliance.
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria (implied from the 510(k) Summary and "Performance Data" sections) | Reported Device Performance (as stated in the document)
---|---
Functional equivalence to the predicate device: image viewing, processing, and analysis of CT/MRI images for functional and dynamic imaging datasets; the specific modules (CT-Perfusion, MR-Perfusion, DWI, Dynamic Analysis, NCCT maps of hypodense/hyperdense tissue, CTA); aid in selection of acute stroke patients (with known occlusion of the intracranial ICA or proximal MCA); calculation of parameters related to tissue flow (perfusion) and tissue blood volume. | "Rapid has the same intended use and similar indications, technological characteristics and principles of operation as its predicate devices." "Rapid is as safe and effective as the previously cleared Rapid (K213165) with an extension of installation in a hybrid configuration..."
Technical compliance: DICOM compliance; operation on standard off-the-shelf computers or virtual platforms; handling of DICOM medical image data (CT, MRI) from various sources; secure communication protocols (SMTP with security extensions, VPN, SSH). | "Rapid complies with DICOM (Digital Imaging and Communications in Medicine) - Developed by the American College of Radiology and the National Electrical Manufacturers Association. NEMA PS 3.1 - 3.20." "Rapid is a DICOM-compliant PACS software..." "Rapid runs on standard 'off-the-shelf' computer and networking hardware." "Rapid generally connects to the infrastructure of the medical partner... Rapid uses a SMTP protocol with security extensions to provide secure communications." "Available types of connection include VPN (Virtual Private Network - RFC2401 and RFC4301 Standards) Tunnel and SSH (Secure Shell)."
Performance accuracy and reliability: accurate representation of key processing parameters; handling of clinically relevant parameters and perturbations; all design requirements and specifications met. | "iSchemaView conducted extensive performance validation testing and software verification and validation testing of the Rapid system." "This performance validation testing demonstrated that the Rapid system provides accurate representation of key processing parameters under a range of clinically relevant parameters and perturbations associated with the intended use of the software." "Software performance, validation and verification testing demonstrated that the Rapid system met all design requirements and specifications." "The Rapid System performance has been validated with phantom and case data."
Safety and effectiveness (no new issues compared to the predicate): compliance with QSR (21 CFR Part 820.30); risk management (EN ISO 14971:2019); software lifecycle processes (IEC 62304:2016); usability engineering (IEC 62366:2015). | "Rapid has been designed, verified and validated in compliance with 21 CFR, Part 820.30 requirements. The device has been designed to meet the requirements associated with EN ISO 14971:2019 (risk management)." "Rapid raises no new issues of safety or effectiveness compared to Rapid (K213165), as demonstrated by the testing conducted with Rapid."
2. Sample Size Used for the Test Set and Data Provenance
The document mentions "The Rapid System performance has been validated with phantom and case data." However, it does not specify the sample size for the test set of "case data" or "phantom data", nor does it specify the country of origin or whether the data was retrospective or prospective. For a 510(k), particularly one proving substantial equivalence to a predicate, new large-scale clinical studies are not always required if software verification and validation suffice, as implied here.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
The document states, "The primary users of Rapid software are medical imaging professionals who analyze tissue using CT or MRI images." However, it does not specify the number of experts used to establish ground truth for the test set, nor does it provide their specific qualifications (e.g., number of years of experience, specific board certifications). It only generically refers to "trained professionals, including but not limited to physicians and medical technicians."
4. Adjudication Method for the Test Set
The document does not specify any adjudication method (e.g., 2+1, 3+1) used for the test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done comparing human readers with AI vs. without AI assistance, nor does it state an effect size for such an improvement. The focus is on the device's standalone performance and its equivalence to the predicate.
6. If a Standalone Performance Study (Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes, the document implies that a standalone performance evaluation of the algorithm's core processing capabilities was conducted. It states: "iSchemaView conducted extensive performance validation testing and software verification and validation testing of the Rapid system. This performance validation testing demonstrated that the Rapid system provides accurate representation of key processing parameters under a range of clinically relevant parameters and perturbations associated with the intended use of the software." This refers to the algorithm's performance in processing images and generating analyses.
7. The Type of Ground Truth Used
The document states, "The Rapid System performance has been validated with phantom and case data." This suggests that the ground truth for "phantom data" would be known physical or simulated values. For "case data," the document does not explicitly state the type of ground truth, such as expert consensus, pathology, or outcomes data. However, given the context of stroke patient selection, clinical outcomes or expert consensus on imaging findings would typically be relevant for such applications.
8. The Sample Size for the Training Set
The document does not provide any information regarding the sample size used for the training set. As this is a 510(k) for an updated version of an existing device, it's possible that the training data for the core AI components was part of earlier development and was not re-evaluated for this specific submission, or that detailed training data was not a required element for this type of substantial equivalence claim.
9. How the Ground Truth for the Training Set Was Established
The document does not provide any information on how the ground truth for the training set was established.
(260 days)
AccuCTP is an image processing software package to be used by trained professionals, including but not limited to physicians and medical technicians. The software runs on a standard off-the-shelf computer, and can be used to perform image viewing, processing and analysis of brain images. Data and images are acquired through DICOM compliant imaging devices.
AccuCTP provides both viewing and analysis capabilities for functional and dynamic imaging datasets acquired with CT Perfusion (CT-P), which can visualize and analyze dynamic imaging data, showing properties of changes in contrast over time. This functionality includes calculation of parameters related to tissue blood volume.
AccuCTP is a standalone software package that provides visualization and study of changes of tissue perfusion in digital images captured by CT (Computed Tomography). The software provides viewing, quantification, analysis and reporting capabilities, and it allows repeated use and continuous processing of data and can be deployed on a supportive customer's PC that meets the minimum system requirements.
AccuCTP works with the DICOM compliant medical image data. AccuCTP provides tools for performing the following types of analysis:
- volumetry of threshold maps
- time intensity plots for dynamic time courses
- measurement of mismatch between rCBF and Tmax threshold volumes obtained from the same scan (a generic sketch of these calculations follows this list).
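As an illustration of the threshold volumetry and rCBF/Tmax mismatch calculations listed above, the sketch below derives lesion volumes from binary threshold maps and a mismatch ratio. The rCBF < 30% and Tmax > 6 s cut-offs are values commonly cited in the stroke-perfusion literature and are used here only as assumptions; the document does not state AccuCTP's thresholds or its pass/fail criteria.

```python
# Minimal sketch of threshold volumetry and mismatch, assuming:
#   - rcbf and tmax are co-registered 3D numpy arrays (relative CBF, Tmax in s)
#   - voxel_volume_ml is the volume of a single voxel in millilitres
#   - the 30% rCBF and 6 s Tmax cut-offs are illustrative, not AccuCTP's values
import numpy as np

def threshold_volumes(rcbf, tmax, voxel_volume_ml, rcbf_cutoff=0.30, tmax_cutoff=6.0):
    core_mask = rcbf < rcbf_cutoff            # critically reduced flow ("core")
    hypoperf_mask = tmax > tmax_cutoff        # delayed arrival ("hypoperfusion")
    core_ml = core_mask.sum() * voxel_volume_ml
    hypoperf_ml = hypoperf_mask.sum() * voxel_volume_ml
    mismatch_ml = hypoperf_ml - core_ml
    mismatch_ratio = hypoperf_ml / core_ml if core_ml > 0 else float("inf")
    return core_ml, hypoperf_ml, mismatch_ml, mismatch_ratio
```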
The provided text, a 510(k) Summary for the AccuCTP device, focuses on demonstrating substantial equivalence to a predicate device (RAPID) rather than providing detailed acceptance criteria and the results of a statistically powered clinical study. However, it does outline performance validation activities.
Here's an analysis of the available information regarding acceptance criteria and performance studies, structured according to your request, with limitations noted due to the nature of the document:
1. Table of Acceptance Criteria and Reported Device Performance
The document states: "Parameter map and Volume results were quantitatively analysed and met the pre-defined pass/fail criteria." However, the specific numerical pre-defined pass/fail criteria are not explicitly stated in this document. The performance is reported in terms of agreement with a "ground truth" (phantom data) and agreement with the predicate device (RAPID CTP).
Acceptance Criteria (General) | Reported Device Performance (as stated in document) |
---|---|
Parameter map results met pre-defined pass/fail criteria | "Parameter map...results were quantitatively analysed and met the pre-defined pass/fail criteria." |
Volume results met pre-defined pass/fail criteria | "Volume results were quantitatively analysed and met the pre-defined pass/fail criteria." |
Agreement with ground truth in phantom test | Achieved, "Parameter map and Volume results were quantitatively analysed and met the pre-defined pass/fail criteria." |
Agreement with predicate device (RAPID CTP) for parameter maps and volume results | A "calculation performance validation was conducted to evaluate the agreement between AccuCTP and RAPID CTP in calculating the parameter maps as well as the volume results... met the pre-defined pass/fail criteria." |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document mentions a "group of phantoms" for the phantom test and a "calculation performance validation" using data to compare with RAPID CTP. However, the exact numerical sample size (number of CT perfusion studies or phantoms) used in these validation studies is not specified.
- Data Provenance: The document does not specify the country of origin for the data used in the "validation study" that compared AccuCTP to RAPID CTP. It also does not explicitly state whether the data was retrospective or prospective. The phantom study clearly used synthetic data.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not mention the use of human experts to establish ground truth for the test sets.
- For the phantom test, the ground truth was inherently known from the design of the phantoms.
- For the "validation study" comparing AccuCTP to RAPID CTP, the ground truth was effectively the output of the predicate device (RAPID CTP), implying a comparison for concordance rather than independent expert adjudication.
4. Adjudication Method for the Test Set
No adjudication method involving human experts is described since the ground truth for the validation was either known from phantoms or based on the predicate device's output.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was it done? No, the document does not describe an MRMC study. The validation described focuses on the agreement of AccuCTP's output (parameter maps and volumes) with physical phantoms and with the predicate device's output. There is no mention of human readers or AI assistance in diagnostic tasks.
- Effect Size of Human Improvement: Not applicable, as no MRMC study was conducted.
6. Standalone (Algorithm Only) Performance
Yes, the studies described are standalone performance evaluations of the AccuCTP algorithm. The phantom test directly evaluated the algorithm's accuracy against known physical properties, and the comparison with RAPID CTP assessed the algorithm's concordance with another software's output. The device is described as "a standalone software package."
7. Type of Ground Truth Used
- Phantom Test: The ground truth was known physical properties/measurements derived from the design of the phantoms.
- Validation Study (comparison with RAPID CTP): The "ground truth" for this comparison was effectively the results/output of the predicate device (RAPID CTP). This is a comparison of computational results for substantial equivalence, not a clinical ground truth for diagnostic accuracy (e.g., pathology, clinical outcomes).
8. Sample Size for the Training Set
The document does not specify the sample size of the training set used for developing or training the AccuCTP algorithm. Performance data in this section refers to validation testing, not training data.
9. How Ground Truth for the Training Set Was Established
The document does not provide any information on how the ground truth for the training set (if supervised learning was used) was established, as it doesn't discuss the training phase of the algorithm development.
(133 days)
Rapid is an image processing software package to be used by trained professionals, including but not limited to physicians and medical technicians. The software runs on a standard off-the-shelf computer or as VMware, and can be used to perform image viewing, processing and analysis of images. Data and images are acquired through DICOM compliant imaging devices.
Rapid provides both viewing and analysis capabilities for functional and dynamic imaging datasets acquired with CT, CT Perfusion (CTP), CT Angiography (CTA), and MRI including a Diffusion Weighted MRI (DWI) Module and a Dynamic Analysis Module (dynamic contrast-enhanced imaging data for MRI and CT).
The CT analysis includes NCCT maps showing areas of hypodense and hyperdense tissue.
The DWI Module is used to visualize local water diffusion properties from the analysis of diffusion-weighted MRI data.
The Dynamic Analysis Module is used for visualization and analysis of dynamic imaging data, showing properties of changes in contrast over time. This functionality includes calculation of parameters related to tissue flow (perfusion) and tissue blood volume.
Rapid CT-Perfusion and Rapid MR-Perfusion can be used by physicians to aid in the selection of acute stroke patients (with known occlusion of the intracranial internal carotid artery or proximal middle cerebral artery).
Instructions for the use of contrast agents for this indication can be found in Appendix A of the User's Manual. Additional information for safe and effective drug use is available in the product-specific iodinated CT and gadolinium-based MR contrast drug labeling.
In addition to the Rapid imaging criteria, patients must meet the clinical requirements for thrombectomy, as assessed by the physician, and have none of the following contraindications or exclusions:
- Bolus Quality: absent or inadequate bolus.
- Patient Motion: excessive motion leading to artifacts that make the scan technically inadequate.
- Presence of hemorrhage.
Rapid is a software package that provides for the visualization and study of changes in tissue using digital images captured by diagnostic imaging systems including CT (Computed Tomography) and MRI (Magnetic Resonance Imaging), as an aid to physician diagnosis. Rapid can be installed on a customer's Server or it can be accessed online as a virtual system. It provides viewing, quantification, analysis and reporting capabilities.
Rapid works with the following types of (DICOM compliant) medical image data:
- CT (Computed Tomography)
- MRI (Magnetic Resonance Imaging)
Rapid acquires (DICOM compliant) medical image data from the following sources:
- DICOM file
- DICOM CD-R
- Network using DICOM protocol
Rapid provides tools for performing the following types of analysis:
- selection of acute stroke patients for endovascular thrombectomy
- volumetry of thresholded maps
- time intensity plots for dynamic time courses
- measurement of mismatch between labeled volumes on co-registered image volumes
- large vessel density
Rapid is a Software as a Medical Device (SaMD) consisting of one or more Rapid Servers (dedicated or virtual). The Rapid Server is an image processing engine that connects to a hospital LAN, or inside the Hospital Firewall. It can be a dedicated Rapid Server or a VM Rapid appliance, which is a virtualized Rapid Server that runs on a dedicated server.
Rapid is designed to streamline medical image processing tasks that are time-consuming and fatiguing in routine patient workup. Once Rapid is installed it operates with minimal user interaction. Once the CT (NCCT, CT, CTA) or MR (MR, MRA) data are acquired, the CT or MRI console operator selects Rapid as the target for the DICOM images, and then the operator selects which study/series data are to be sent to Rapid. Based on the type of incoming DICOM data, Rapid will identify the data set scanning modality and determine the suitable processing module. The Rapid platform is a central control unit which coordinates the execution of the image processing modules that support various analysis methods used in clinical practice today:
- Rapid CTP/MRP, DWI, Dynamic Analysis (Original: K121447; Updated with K172477 and K182130)
- Rapid CTA (K172477)
- Rapid ASPECTS (K190395)
- Rapid ICH (K193087)
- Rapid LVO (K200941)
The iSchemaView Server is a dedicated server that provides a central repository for Rapid data. All iSchemaView Server data is stored on encrypted hard disks. It also provides a user interface for accessing Rapid data. It connects to a firewalled Data Center Network and has its own firewall for additional cyber/data security. The iSchemaView Server connects to one or more Rapid Servers via WAN. Available types of connection include VPN (Virtual Private Network - RFC2401 and RFC4301 Standards) Tunnel and SSH (Secure Shell).
Here's a breakdown of the acceptance criteria and study details for the Rapid device, specifically focusing on the NCCT Motion Artifact AI/ML Module performance, as described in the provided 510(k) summary:
Acceptance Criteria and Reported Device Performance (NCCT Motion Artifact AI/ML Module)
Metric | Acceptance Criteria (Optimal Performance from training validation) | Reported Device Performance (Final Independent Validation) |
---|---|---|
AUC | 0.95 | 0.96 (0.94, 0.97) |
Sensitivity | 0.95 | 0.91 (0.83, 0.95) |
Specificity | 0.96 | 0.86 (0.83, 0.89) |
Primary Endpoint | N/A (implied by meeting sensitivity/specificity targets for "weak artifact = 0") | Passed (weak artifact = 0) |
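The AUC, sensitivity, and specificity figures in the table above (with what appear to be confidence intervals in parentheses) are standard binary-classification metrics. The sketch below shows how such slice-level metrics are typically computed; the `y_true`/`y_score` inputs and the 0.5 operating threshold are assumptions, and this is not the manufacturer's evaluation code (confidence intervals would usually come from a bootstrap, which is omitted here).

```python
# Generic sketch: AUC, sensitivity, and specificity for a binary motion-artifact
# classifier, assuming per-slice labels y_true (0/1) and model scores y_score.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def evaluate(y_true, y_score, threshold=0.5):
    auc = roc_auc_score(y_true, y_score)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return auc, sensitivity, specificity
```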
Study Details
- Sample sizes used for the test set and the data provenance:
- Test Set Sample Size: N=619 axial image slices.
- Data Provenance: The text does not explicitly state the country of origin for the test set data. It mentions that samples were obtained from "Siemens, GE, Toshiba, Philips, and Neurologica" for training, and for the independent validation, "The samples were primarily from Siemens with GE mixed." This suggests a multi-vendor, and likely multi-site, collection. The study appears to be retrospective as it uses existing medical images for evaluation.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: 3
- Qualifications of Experts: Described as "experienced truthers." Specific qualifications (e.g., years of experience, subspecialty) are not provided.
- Adjudication method for the test set:
- The document states "ground truth established by 3 experienced truthers." While it doesn't explicitly mention a 2+1 or 3+1 method, the implication of "established by" multiple experts suggests a consensus-based approach was used to determine the ground truth from these three experts. It does not state "none."
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, a multi-reader, multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not conducted or reported in this summary for the NCCT Motion Artifact AI/ML Module. The performance evaluation is for the standalone algorithm.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Yes, a standalone algorithm performance study was done for the NCCT Motion Artifact AI/ML Module. The reported metrics (AUC, Sensitivity, Specificity) are for the algorithm's performance in detecting motion artifacts.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The ground truth for the test set was established by expert consensus from 3 experienced truthers.
- The sample size for the training set:
- Training Set: 23,066 axial image slices (Positive: 1,021, Negative: 12,877).
- Training Validation Set: 5,906 axial image slices (Positive: 422, Negative: 5,484).
- How the ground truth for the training set was established:
- The document does not explicitly detail how the ground truth for the training data was established. However, given the context of medical image analysis and the subsequent use of "experienced truthers" for independent validation, it's highly probable that human expert review and labeling were also used to establish the ground truth for the training and training validation sets.
(129 days)
icobrain ctp is an image processing software package to be used by trained professionals, including but not limited to physicians and medical technicians. The software runs on a standard "off-the-shelf" computer or a virtual platform, such as VMware, and can be used to perform image processing, and communication of computed tomography (CT) perfusion scans of the brain. Data and images are acquired through DICOM-compliant imaging devices.
icobrain ctp provides both analysis and communication capabilities for dynamic imaging datasets that are acquired with CT Perfusion imaging protocols. Analysis includes calculation of parameters related to tissue flow (perfusion) and tissue blood volume. Results of image processing which include CT perfusion parameter maps generated from a raw CTP scan are exported in the standard DICOM format and may be viewed on existing radiological imaging viewers.
The input images are CT perfusion images. During the pre-processing, each scan is loaded from the DICOM format: the image data and relevant DICOM tags are extracted. The image processing block calculates the perfusion parameters and the volumes of the Tmax abnormality (defined as tissue with delayed arrival) and the CBF abnormality (defined as tissue with delayed arrival and critically decreased cerebral blood flow). Finally, the computed measurements are summarized into an electronic report. Optionally, if requested, Tmax and CBF abnormality segmentations are overlaid on the input images and image volumes of the perfusion parameter maps are sent.
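The pre-processing step described here, loading each scan from DICOM and extracting the pixel data and relevant tags, is standard DICOM handling. Below is a minimal sketch using the open-source pydicom library; the chosen tags and the sorting rule are illustrative assumptions, not icometrix's actual pipeline.

```python
# Minimal sketch: load a CT perfusion series from a directory of DICOM files,
# extract the pixel data and a few tags needed to order the time points.
# Illustrative only; not icometrix's actual pre-processing code.
from pathlib import Path
import numpy as np
import pydicom

def load_ctp_series(series_dir: str):
    datasets = [pydicom.dcmread(p) for p in sorted(Path(series_dir).glob("*.dcm"))]
    # Order slices by acquisition time, then by slice position (assumed tags).
    datasets.sort(key=lambda ds: (str(ds.get("AcquisitionTime", "")),
                                  float(ds.get("SliceLocation", 0.0))))
    volume = np.stack([ds.pixel_array.astype(np.float32) for ds in datasets])
    tags = {
        "AcquisitionTimes": [str(ds.get("AcquisitionTime", "")) for ds in datasets],
        "PixelSpacing": datasets[0].get("PixelSpacing", None),
        "SliceThickness": datasets[0].get("SliceThickness", None),
    }
    return volume, tags
```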
Here's an analysis of the acceptance criteria and study details for the icobrain-ctp device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document describes several performance tests. The specific numerical acceptance criteria are not always explicitly stated in the same detail as the results, but the type of metric used and the general outcome can be inferred.
Test Type | Metric/Acceptance Criteria | Reported Device Performance |
---|---|---|
Accuracy - Clinical Dataset (CBF & Tmax Abnormality Volumes vs. Reference Device) | Percentile 90 of the volume differences for both CBF abnormality and Tmax abnormality. | "All experiments passed the acceptance criteria." (Specific P90 values not provided, but implies they met the set thresholds) |
Accuracy - Clinical Dataset (Unbiased CBF Abnormality Volume vs. Manual DWI Delineation) | Percentile 90 of the volume differences. | "All experiments passed the acceptance criteria." (Specific P90 values not provided) |
Accuracy - Clinical Dataset (ROI Volume vs. Manual Annotation) | Percentile 90 of the volume differences. | "All experiments passed the acceptance criteria." (Specific P90 values not provided) |
Reproducibility - Clinical Dataset (Tmax & CBF Abnormality Volumes on Test/Retest) | Percentile 90 of the volume differences for both CBF abnormality and Tmax abnormality. | "All experiments passed the acceptance criteria." (Specific P90 values not provided) |
Accuracy - Digital Phantom (Perfusion Parameter Maps: CBV, CBF, MTT) | Correlation, Percentile 90 absolute difference, and mean relative difference between ground truth and estimated values. | "In the digital phantom, the correlation for each perfusion parameter was above 0.90." (Implies P90 absolute difference and mean relative difference also met their criteria, though specific values are not given) |
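The acceptance metrics named in the table above, the 90th percentile of volume differences and the correlation against phantom ground truth, can be computed generically as sketched below; the paired-array inputs are assumptions and this is not icometrix's analysis code.

```python
# Generic sketch of the metrics named above: 90th percentile of absolute volume
# differences and Pearson correlation between device output and reference values.
import numpy as np

def p90_volume_difference(device_ml, reference_ml):
    diffs = np.abs(np.asarray(device_ml, float) - np.asarray(reference_ml, float))
    return np.percentile(diffs, 90)

def pearson_correlation(device_vals, reference_vals):
    return np.corrcoef(np.asarray(device_vals, float),
                       np.asarray(reference_vals, float))[0, 1]
```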
2. Sample Sizes Used for the Test Set and Data Provenance
- Sample Size for Clinical Test Set: Not explicitly stated, but mentioned as "a dataset of clinical CTP scans."
- Sample Size for Digital Phantom Test Set: Not explicitly stated, but described as "a wide range of clinically relevant values of perfusion parameters."
- Data Provenance (Clinical Test Set): "The subjects upon whom the software was tested include stroke patients." No specific country of origin is mentioned. It is described as a "clinical dataset," which typically implies retrospective use of existing patient data, but it is not definitively stated as prospective or retrospective.
- Data Provenance (Digital Phantom Test Set): Generated by simulating tracer kinetic theory.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document mentions "manually delineated DWI images" and "manually annotated ROI" for establishing ground truth in the clinical accuracy experiments. However:
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified (e.g., specific medical specialty, years of experience).
4. Adjudication Method for the Test Set
The adjudication method is not explicitly stated. The text refers to "manually delineated DWI images" and "manually annotated ROI," which suggests expert involvement, but whether multiple experts were involved and an adjudication process (like 2+1 or 3+1) was used is not detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study involving human readers with and without AI assistance was not described in the provided text. The performance evaluations focus on the algorithm's standalone performance against established ground truth (either from reference devices, manual expert delineation, or digital phantoms).
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, the studies described are standalone (algorithm-only) performance evaluations. The device's output (e.g., Tmax and CBF abnormality volumes, perfusion parameter maps) is compared directly to reference data or ground truth without human interaction as part of the primary outcome assessment.
7. The Type of Ground Truth Used
- Clinical Dataset:
- Comparison to a "reference device" for CBF and Tmax abnormality volumes.
- "Manually delineated DWI images" for unbiased CBF abnormality volume.
- "Manually annotated ROI" for ROI volume.
- Digital Phantom Dataset: Ground truth generated by "simulating tracer kinetic theory" for perfusion parameters (CBV, CBF, MTT); a generic simulation sketch follows below.
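Digital perfusion phantoms of this kind are typically generated by convolving a known arterial input function (AIF) with a known residue function at prescribed parameter values, so the ground truth is exact by construction. The sketch below is a generic illustration under that assumption (gamma-variate AIF, exponential residue function); it is not icometrix's phantom generator.

```python
# Generic digital-phantom sketch: build a tissue curve by convolving a known AIF
# with a known residue function at a prescribed CBF and MTT, so the ground-truth
# perfusion parameters are exact by construction. Illustrative only.
import numpy as np

def gamma_variate_aif(t, t0=5.0, alpha=3.0, beta=1.5):
    """Simple gamma-variate bolus shape used as the AIF (arbitrary units)."""
    s = np.clip(t - t0, 0.0, None)
    return (s ** alpha) * np.exp(-s / beta)

def simulate_tissue_curve(t, cbf_ml_per_100g_min=60.0, mtt_s=4.0):
    dt = t[1] - t[0]
    aif = gamma_variate_aif(t)
    residue = np.exp(-t / mtt_s)                 # exponential residue function
    cbf = cbf_ml_per_100g_min / (100.0 * 60.0)   # convert to mL/g/s
    tissue = cbf * np.convolve(aif, residue)[: len(t)] * dt
    return aif, tissue

t = np.arange(0.0, 60.0, 0.5)                    # 60 s acquisition, 0.5 s sampling
aif, tissue = simulate_tissue_curve(t)
```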
8. The Sample Size for the Training Set
The document does not provide any information about the training set size or methodology. The performance testing section focuses solely on validation data.
9. How the Ground Truth for the Training Set Was Established
As no information is provided about the training set, there is also no information on how its ground truth was established.
(99 days)
The syngo.CT Neuro Perfusion software package is designed to evaluate areas of brain perfusion. The software processes images or volumes that were reconstructed from continuously acquired CT data after the injection of contrast media. It generates the following result volumes:
- Cerebral blood flow (CBF)
- Cerebral blood volume (CBV)
- Local bolus timing (time to start (TTS), time to peak (TTP), time to drain (TTD))
- Mean transit time (MTT)
- Transit time to the center of the IRF (TMax)
- Flow extraction product (permeability)
- Temporal MIP
- Temporal average
- Baseline volume
- Modified dynamic input data
The software also allows the calculation of mirrored regions or volumes of interest and the visual inspection of time attenuation curves. One clinical application is to visualize the apparent blood perfusion and the parameter mismatch in brain tissue affected by acute stroke.
Areas of decreased perfusion appear as areas of changed signal intensity:
- Lower signal intensity for CBF and CBV
- Higher signal intensity for TTP, TTD, MTT, and TMax
A second application is to visualize blood brain barrier disturbances by modeling extravascular leakage of blood into the interstitial space. This additional capability may improve the differential diagnosis of brain tumors and be helpful in therapy monitoring.
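Permeability ("flow extraction product") imaging of this kind is commonly based on a Patlak-type leakage model; the submission does not state which model syngo.CT Neuro Perfusion uses, so the following is only the generic formulation:

$$C_t(t) \;=\; K \int_0^{t} C_a(\tau)\,d\tau \;+\; v_p\,C_a(t)$$

Plotting $C_t(t)/C_a(t)$ against $\int_0^t C_a(\tau)\,d\tau \,/\, C_a(t)$ gives an approximately straight line whose slope $K$ estimates the flow-extraction product and whose intercept $v_p$ estimates the fractional blood (plasma) volume.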
The syngo.CT Neuro Perfusion software allows the quantitative evaluation of dynamic CT data of the brain acquired during the injection of a compact bolus of iodinated contrast material. It mainly aids in the early differential diagnosis of acute ischemic stroke. The Blood-brain-barrier (BBB) imaging feature supports the diagnostic assessment of brain tumors.
By providing images of e.g. cerebral blood flow (CBF), cerebral blood volume (CBV), time to peak (TTP), and Mean Transit Time (MTT) from one set of dynamic CT images or volumes, syngo.CT Neuro Perfusion allows a quick and reliable assessment of the type and extent of cerebral perfusion disturbances, including fast evaluation of the tissue at risk and non-viable tissue in the brain. The underlying approaches for this application were cleared as part of the predicate device and remain unchanged in comparison to the predicate device.
syngo.CT Neuro Perfusion allows simultaneous multi-slice processing and supports the workflow requirements in a stroke workflow. The availability of flow extraction product imaging extends the option to the diagnosis of brain tumors. A listing of device modifications as part of the new software version VB20 of syngo.CT Neuro Perfusion is as follows:
- Auto Stroke Workflow (Calculation and display of the stroke results without user input)
- Rapid Results Technology (Calculates stroke results and quality control images without user input and sends all images to other DICOM nodes)
- Additional Parameters for Penumbra and Core Infarct Calculation
This software is designed to operate on at least the syngo.via VB20 hardware/software platform, and should be used with reconstructed images that meet the following minimum requirements:
- Images should be reconstructed with a high sampling frequency. Scan modes are e.g. adaptive 4D spiral, Dynamic sequence and dynamic multi-scan modes of Siemens CT scanners.
- A standard reconstruction kernel should be used.
- Images should be reconstructed with an increment smaller than the slice thickness to achieve good resolution.
The provided text describes the syngo.CT Neuro Perfusion software, but it does not include a table of acceptance criteria with reported device performance or details of a specific comparative study. Instead, it focuses on demonstrating substantial equivalence to a predicate device through shared technological characteristics and general software verification and validation.
Here's a breakdown of the requested information based on the provided text, highlighting what is present and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance
- Not provided. The document states that "The testing results support that all the software specifications have met the acceptance criteria" and "The results of these tests demonstrate that the subject device performs as intended." However, specific quantitative acceptance criteria or corresponding reported performance metrics are not given.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Not provided. The document mentions "non-clinical tests" and "verification/validation testing" but does not specify the sample size of the test set (e.g., number of cases or patients) or the provenance (country of origin, retrospective/prospective nature) of any data used for testing.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Not provided. There is no mention of experts being used to establish a ground truth for any test set. The document focuses on performance testing related to software functionality and specifications.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not provided. Since no expert review or ground truth establishment based on human readers is described, there is no mention of an adjudication method.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No such study described. The document does not describe an MRMC comparative effectiveness study involving human readers or any effect size related to AI assistance. The focus is on the device's technical performance and its substantial equivalence to a predicate.
6. If a standalone (i.e. algorithm only without human-in-the loop performance) was done
- Implied standalone testing, but not explicitly detailed. The "Non-Clinical Testing Summary" mentions "Performance tests were conducted to test the functionality of the syngo.CT Neuro Perfusion" and that "The results of these tests demonstrate that the subject device performs as intended." This suggests standalone testing of the algorithm's functionality, but the specifics of how this was measured (e.g., against what gold standard) for clinical parameters are not elaborated. The claims are focused on the software's ability to generate specific result volumes (CBF, CBV, TTP, etc.).
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not explicitly stated for clinical ground truth. The document does not specify a type of clinical ground truth (like pathology or outcomes data) used for comparing the device's generated parameters to a reference. The testing described appears to be primarily focused on verifying that the software's outputs are consistent with its design specifications and computational models, rather than an external clinical gold standard.
8. The sample size for the training set
- Not provided. The document describes performance testing in support of substantial equivalence and software verification/validation. It does not mention a training set or its sample size, indicating that this submission is not primarily based on a new AI model requiring training data. The underlying approaches are stated to be "unchanged in comparison to the predicate device."
9. How the ground truth for the training set was established
- Not applicable. As no training set is mentioned or implied for a new AI model, the method for establishing its ground truth is not discussed.