510(k) Data Aggregation
(108 days)
Rapid OH is a radiological computer aided triage and notification software indicated for suspicion of Obstructive Hydrocephalus (OH) in non-enhanced CT head images of adult patients. The device is intended to assist trained clinicians in workflow prioritization triage by providing notification of suspected findings in head CT images.
Rapid OH uses an artificial intelligence algorithm to analyze images and highlight cases with suspected OH on a server or standalone desktop application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected OH findings. Notifications include compressed preview images that are meant for informational purposes only and are not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.
The results of Rapid OH are intended to be used in conjunction with other patient information and based on professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.
Contraindications/Limitations/Exclusions:
- Rapid OH is intended for use for adult patients.
- Input data image series containing excessive patient motion or metal implants may impact module analysis accuracy, robustness and quality.
- Ventriculoperitoneal shunts are contraindicated.
Exclusions:
- Series with missing slices or improperly ordered slices
- Data acquired at X-ray tube voltage < 100 kVp or > 140 kVp
- Data not representing human head or head/neck anatomical regions
The Rapid OH software device is a radiological computer-aided triage and notification software device using AI/ML. The Rapid OH device is a non-contrast CT (NCCT) processing module which operates within the integrated Rapid Platform to provide notification of suspected findings of obstructive hydrocephalus (OH). Rapid OH is a Software as a Medical Device (SaMD) that analyzes input NCCT images provided in DICOM format and flags suspected findings for workflow prioritization.
Here's a breakdown of the acceptance criteria and study details for the Rapid OH device, based on the provided FDA 510(k) clearance letter:
1. Table of Acceptance Criteria and Reported Device Performance
| Metric | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Primary Endpoint: Sensitivity (Se) | Not stated as an explicit numerical threshold; the endpoint was reported as having "passed" a pre-defined statistical criterion. | 89.5% (95% CI: 0.837-0.935) |
| Primary Endpoint: Specificity (Sp) | Not stated as an explicit numerical threshold; the endpoint was reported as having "passed" a pre-defined statistical criterion. | 97.6% (95% CI: 0.940-0.991) |
| Secondary Endpoint: Time to Notification | No numerical acceptance criterion stated. | 30.3 seconds (range 10.5-55.5 seconds) |
Note: The document states "Standalone performance primary endpoint passed with sensitivity (Se) of 89.5% (95% CI:0.837-0.935) and specificity (Sp) of 97.6% (95% CI:0.940-0.991)". While explicit numerical acceptance criteria for sensitivity and specificity are not provided, the "passed" statement implies that the reported performance fell within pre-defined acceptable ranges or met a statistical hypothesis.
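The reported intervals are consistent with a standard binomial confidence interval on the 320-case test set. As a check, here is a minimal sketch using the Wilson score interval; the positive/negative split and true-positive counts below are hypothetical, since the clearance summary does not publish them, and the document does not state which interval method was actually used:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (95% for z = 1.96)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical split of the 320-case test set: 171 OH-positive, 149 OH-negative,
# with 153 true positives and 145 true negatives (these counts are NOT in the document).
se_lo, se_hi = wilson_ci(153, 171)   # sensitivity ~0.895, CI ~(0.84, 0.93)
sp_lo, sp_hi = wilson_ci(145, 149)   # specificity ~0.973, CI ~(0.93, 0.99)
print(f"Se CI: ({se_lo:.3f}, {se_hi:.3f}); Sp CI: ({sp_lo:.3f}, {sp_hi:.3f})")
```

Under these assumed counts the sensitivity interval comes out near the reported 0.837-0.935, which is why the "passed" statement plausibly refers to a CI-based criterion.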
2. Sample Size for the Test Set and Data Provenance
- Sample Size for Test Set: 320 cases
- Data Provenance: The document mentions "diversity amongst demographics (M: 45%, F: 54%)"; sites and manufacturers (GE, Philips, Siemens, Toshiba); and confounders (ICH, ischemic stroke, tumor, cyst, aqueductal stenosis, mass effect, brain atrophy, and communicating hydrocephalus). While specific countries of origin are not explicitly stated, the mention of multiple manufacturers and multiple sites (74 sites for algorithm development, plus the validation sites) suggests a diverse, likely multi-site, and potentially multi-country dataset, although this is not definitively confirmed for the test set itself. The dataset appears to be retrospective, as it was used for algorithm development and validation based on existing cases.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: 3 experts (implied from "Truthing was established using 2:3 experts.")
- Qualifications of Experts: Not explicitly stated in the provided text. They are referred to as "experts." In regulatory contexts, these would typically be radiologists or neuro-radiologists with significant experience in interpreting head CTs.
4. Adjudication Method for the Test Set
- Adjudication Method: "2:3 experts." This means that ground truth was established by agreement from at least 2 out of 3 experts.
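A 2-of-3 adjudication rule is straightforward to express in code. A minimal sketch with hypothetical per-expert labels (this is an illustration of the rule, not the sponsor's truthing tooling):

```python
from collections import Counter

def adjudicate(reads: list[str]) -> str:
    """Assign ground truth as the label that at least 2 of the 3 expert reads share."""
    label, votes = Counter(reads).most_common(1)[0]
    assert votes >= 2, "no 2:3 majority"  # cannot occur with 3 binary labels
    return label

print(adjudicate(["OH", "OH", "no OH"]))  # -> "OH"
```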
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study Done: No, an MRMC comparative effectiveness study was not explicitly mentioned for this device. The study described is a standalone performance validation of the algorithm.
6. Standalone Performance (Algorithm Only without Human-in-the-Loop) Done
- Standalone Performance Done: Yes, "Final device validation included standalone performance validation. This performance validation testing demonstrated the Rapid OH device provides accurate representation of key processing parameters under a range of clinically relevant conditions associated with the intended use of the software." The reported sensitivity and specificity values are for this standalone performance.
7. Type of Ground Truth Used
- Type of Ground Truth: Expert consensus ("Truthing was established using 2:3 experts.")
8. Sample Size for the Training Set
- Sample Size for Training Set: 3340 cases (This refers to "Algorithm development" which encompasses training and likely internal validation/development sets).
9. How the Ground Truth for the Training Set Was Established
- How Ground Truth Was Established (Training Set): The document states "Algorithm development was performed using 3340 cases... Truthing was established using 2:3 experts." This implies that the same expert consensus method (2 out of 3 experts) used for the test set was also used to establish ground truth for the cases used in algorithm development (which includes the training set).
(15 days)
Rapid DeltaFuse is an image processing software package to be used by trained professionals, including but not limited to physicians and medical technicians.
The software runs on a standard off-the-shelf computer or a virtual platform, such as VMware, and can be used to perform image viewing, processing, and analysis of images.
Data and images are acquired through DICOM compliant imaging devices.
Rapid DeltaFuse provides both viewing and analysis capabilities for imaging datasets acquired with Non-Contrast CT (NCCT) images.
The CT analysis includes NCCT maps showing areas of hypodense and hyperdense tissue including overlays of time differentiated scans of the same patient.
Rapid DeltaFuse is intended for use for adults.
Rapid DeltaFuse (DF) is a Software as a Medical Device (SaMD) image processing module and is part of the Rapid Platform. It provides visualization of time differentiated neuro hyperdense and hypodense tissue from Non-Contrast CT (NCCT) images.
Rapid DF is integrated into the Rapid Platform, which provides common functions and services to support image processing modules, such as DICOM filtering and job and interface management, along with external-facing cybersecurity controls. The Integrated Module and Platform can be installed on-premises within a customer's infrastructure behind their firewall or in a hybrid on-premises/cloud configuration. The Rapid Platform accepts DICOM images and, upon processing, returns the processed DICOM images to the source imaging modality or PACS.
The provided FDA 510(k) clearance letter for Rapid DeltaFuse describes the acceptance criteria and the study that proves the device meets those criteria, though some details are absent.
Here's a breakdown of the information found in the document, structured according to your request:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly stated as quantified targets. Instead, the document describes the type of performance evaluated and the result obtained.
| Acceptance Criteria (Implied/Description of Test) | Reported Device Performance |
|---|---|
| Co-registration accuracy for slice overlays | DICE coefficient of 0.94 (Lower Bound 0.93) |
| Software performance meeting design requirements and specifications | "Software performance testing demonstrated that the device performance met all design requirements and specifications." |
| Reliability of processing and analysis of NCCT medical images for visualization of change | "Verification and validation testing confirms the software reliably processes and supports analysis of NCCT medical images for visualization of change." |
| Performance of Hyperdensity and Hypodensity display with image overlay | "The Rapid DF performance has been validated with a 0.95 DICE coefficient for the overlay addition to validate the overlay performance..." |
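For reference, the DICE coefficient cited for co-registration and overlay performance is a standard overlap metric between two binary masks. A minimal sketch of how it is typically computed (a generic illustration, not iSchemaView's implementation):

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DICE coefficient: 2*|A intersect B| / (|A| + |B|) for binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0

# Toy example: two 2D masks that mostly overlap
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 2:6] = True
print(round(dice(a, b), 3))  # 0.75
```

A DICE of 0.94-0.95 therefore indicates near-complete voxel overlap between the algorithm's aligned/overlaid output and the reference.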
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 14 cases were used for the co-registration analysis. The sample size for other verification and validation testing is not specified.
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective or prospective).
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- This information is not provided in the document. The document refers to "performance validation testing" and "software verification and validation testing" but does not detail the involvement of human experts or their qualifications for establishing ground truth.
4. Adjudication Method for the Test Set
- This information is not provided in the document.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study was reported. The document focuses on the software's performance (e.g., DICE coefficient for co-registration) rather than its impact on human reader performance.
6. Standalone (Algorithm Only) Performance Study
- Yes, a standalone performance study was done. The reported DICE coefficients (0.94 and 0.95) are measures of the algorithm's performance in co-registration and overlay addition, independent of human interaction.
7. Type of Ground Truth Used
- The document implies that the ground truth for co-registration and overlay performance was likely established through a reference standard based on accurate image alignment and feature identification, against which the algorithm's output (DICOM images with overlays) was compared. The exact method of establishing this reference standard (e.g., manual expert annotation, a different validated algorithm output) is not explicitly stated.
8. Sample Size for the Training Set
- The document does not specify the sample size used for training the Rapid DeltaFuse algorithm.
9. How Ground Truth for the Training Set Was Established
- The document does not specify how the ground truth for the training set was established.
(210 days)
The Rapid MLS software device is designed to measure the midline shift of the brain from a NCCT acquisition and report the measurements. Rapid MLS analyzes adult cases using machine learning algorithms to identify locations and measurements of the expected brain midline and any shift which may have occurred. The Rapid MLS device provides the user with annotated images showing measurements. Its results are not intended to be used on a stand-alone basis for clinical decision-making or otherwise preclude clinical assessment of NCCT cases.
The Rapid MLS software device is a radiological computer-assisted image processing software device using AI/ML. The Rapid MLS device is a non-contrast CT (NCCT) processing module which operates within the integrated Rapid Platform to provide a measurement of the brain midline. The Rapid MLS software analyzes input NCCT images that are provided in DICOM format and provides both a visual output containing a color overlay image displaying the difference between the expected and indicated brain midline at the Foramen of Monro, and a text file output (json format) containing the quantitative measurement.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) Clearance Letter for Rapid MLS (K243378):
Acceptance Criteria and Device Performance
The core of the acceptance criteria for Rapid MLS appears to be its ability to measure midline shift with an accuracy comparable to or better than human experts.
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Mean Absolute Error (MAE) of Rapid MLS non-inferior to MAE of experts. | Rapid MLS MAE: 0.7 mm; experts' average pairwise MAE: 1.0 mm |
| Intercept of Passing-Bablok fit (Rapid MLS vs. Reference MLS) close to 0. | Intercept: 0.12 (0, 0.2) |
| Slope of Passing-Bablok fit (Rapid MLS vs. Reference MLS) close to 1. | Slope: 0.95 (0.9, 1.0) |
| No bias demonstrated in differences between Rapid MLS and reference MLS. | Paired t-test p-value: 0.1800 (indicates no significant bias) |
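As a rough illustration of these endpoints, here is a minimal sketch of how the device-vs-reference MAE and the bias t-test are typically computed. The measurement values below are hypothetical, since per-case data is not published in the clearance letter:

```python
import numpy as np
from scipy import stats

# Hypothetical midline-shift measurements in mm (device vs. expert reference)
device = np.array([0.1, 2.3, 5.6, 11.9, 0.8, 7.2])
reference = np.array([0.0, 2.8, 5.1, 12.4, 0.5, 6.8])

mae = np.mean(np.abs(device - reference))             # endpoint: non-inferior to experts' MAE
t_stat, p_bias = stats.ttest_rel(device, reference)   # paired t-test for systematic bias
print(f"MAE = {mae:.2f} mm; bias p-value = {p_bias:.4f}")
```

A bias p-value above 0.05 (the study reported 0.1800) means no statistically significant systematic over- or under-measurement was detected.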
Study Details
Here's a detailed summary of the study proving the device meets the acceptance criteria:
2. Sample Size Used for the Test Set and Data Provenance:
- Sample Size: 153 NCCT cases.
- Data Provenance:
  - Country of Origin: Not explicitly stated for all cases, but sourced from 13 sites (2 OUS [Outside US], 11 US). This indicates a mix of international and domestic data.
  - Retrospective or Prospective: Not explicitly stated, but the statements that "validation data was sourced and blinded independent of the development cases" and that a "demographic split for age and gender... used to test for broad demographic representation and avoidance of overlap bias with development" suggest these were pre-existing, retrospectively collected cases (i.e., not prospectively collected for this trial).
  - Scanner Manufacturers: Mixed from GE, Philips, Toshiba, and Siemens scanners.
  - Demographics: Male: 44%, Female: 56%; Age Range: 26-93 years.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications:
- Number of Experts: 3 experts.
- Qualifications of Experts: Not explicitly stated, but the context implies they are medical professionals who use midline shift as a clinical metric, likely radiologists or neurologists.
4. Adjudication Method for the Test Set:
- Method: Expert consensus was used to establish ground truth. The document states "ground truth established by 3 experts," but the specific method (e.g., majority vote, discussion to consensus) is not detailed. The "experts average pairwise MAE" suggests individual expert measurements were consolidated. It is not explicitly stated whether a 2+1 or 3+1 method was used.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:
- The study compares the device's performance to human experts, but it is not a traditional MRMC comparative effectiveness study in which human readers are evaluated with and without AI assistance.
- Effect Size of Human Reader Improvement with AI vs. Without AI Assistance: Not assessed. The study evaluated the standalone performance of the AI against expert measurements (i.e., the AI as a "reader" vs. expert "readers"). The Indications for Use state that the results "are not intended to be used on a stand-alone basis for clinical decision-making or otherwise preclude clinical assessment of NCCT cases," implying an assistive tool, but the study measures the AI's accuracy against experts, not the improvement of experts using the AI.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop) Performance Study Was Done:
- Yes. The document states, "Final device validation included standalone performance validation." The reported MAE of Rapid MLS and its comparison to the experts' pairwise MAE directly reflect standalone performance.
7. The Type of Ground Truth Used:
- Expert consensus from the 3 experts.
8. The Sample Size for the Training Set:
- 138 cases (of 162 development cases; the remaining 24 were used for development validation).
9. How the Ground Truth for the Training Set Was Established:
- The document states, "Algorithm development was performed using 162 cases from multiple sites; training included 24 cases for validation and 138 for training." It does not explicitly state how ground truth was established for these cases, but a similar (if not identical) expert-annotation process is probable, given the reliance on expert consensus for the validation/test set. The development cases were chosen to cover 0-18.6 mm offsets from the expected midline, indicating a process of identifying and labeling midline shift in these cases.
(198 days)
Rapid Aneurysm Triage and Notification (ANRTN) is a radiological computer-assisted triage and notification software device for analysis of CT images of the head. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and prioritizing studies with suspected saccular aneurysms during routine patient care. Rapid ANRTN uses an artificial intelligence algorithm to analyze images and highlight studies with suspected saccular aneurysms in a standalone application for study list prioritization or triage in parallel to ongoing standard of care. The device generates compressed preview images that are meant for informational purposes only and not intended for diagnostic use. The device does not alter the original medical image and is not intended to be used as a diagnostic device. Analyzed images are available for review through the PACS, email and mobile application. When viewed the images are for informational purposes only and not for diagnostic use. The results of Rapid ANRTN, in conjunction with other clinical information and professional judgment, are to be used to assist with triage/prioritization of saccular aneurysm cases. Radiologists who read the original medical images are responsible for the diagnostic decision. Rapid ANRTN is limited to analysis of imaging data and should not be used in-lieu of full patient evaluation or relied upon to make or confirm diagnosis.
Rapid ANRTN is limited to detecting saccular aneurysms at least 4 mm in diameter in adults.
The Rapid ANRTN software device is a radiological computer-assisted image processing software device. The Rapid ANRTN device is a CTA processing module which operates within the integrated Rapid Platform to determine the suspicion of head saccular aneurysm(s). The ANRTN software analyzes input CTA images that are provided in DICOM format and provides notification of suspected saccular aneurysm(s) and a non-diagnostic, compressed image for preview. Rapid ANRTN is an AI/ML image processing module which integrates within the Rapid Platform.
The provided text describes the acceptance criteria and the study that proves the device (Rapid Aneurysm Triage and Notification - Rapid ANRTN) meets these criteria.
Here's the breakdown of the requested information:
1. A table of acceptance criteria and the reported device performance
| Metric | Acceptance Criteria (Product Code QFM Definition) | Reported Device Performance |
|---|---|---|
| AUC (for overall performance) | > 0.95 (for high performance) | > 0.95 |
| Sensitivity | Not explicitly defined as a threshold, but reported as a key metric. | 0.933 |
| Specificity | Not explicitly defined as a threshold, but reported as a key metric. | 0.868 |
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: 266 CTA cases (151 positive for aneurysm, 115 negative).
- Data Provenance:
- Country of Origin: Not explicitly stated in the provided text.
- Retrospective or Prospective: Not explicitly stated, but the mention of cases "obtained from Siemens, GE, Toshiba, and Philips scanners" and "698 (633 training, 65 validation) CTA cases from multiple sites" suggests a retrospective collection of existing imaging data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: 3 experts.
- Qualifications of Experts: Not explicitly stated beyond "experts." It is typically assumed these are trained medical professionals (e.g., radiologists) with relevant experience, but specific qualifications are not detailed in the provided text.
4. Adjudication method for the test set
- Adjudication Method: "Ground truth established by 3 experts." This implies a consensus-based approach, but the specific adjudication method (e.g., majority vote, specific tie-breaking rules, or if all 3 had to agree) is not explicitly detailed (e.g., 2+1, 3+1). It likely refers to a consensus reading among the three experts.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly stated or described. The study focused on the standalone performance of the algorithm. The device's intended use is to "assist hospital networks and trained radiologists in workflow triage," implying an assistive role to humans, but the provided data only shows the algorithm's performance, not human performance with and without assistance.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Standalone Performance: Yes, a standalone performance validation was done. The text explicitly states: "Final device validation included standalone performance validation." and "This performance validation testing demonstrated the Rapid ANRTN device provides accurate representation of key processing parameters under a range of clinically relevant perturbations associated with the intended use of the software."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: Expert consensus. The text states, "ground truth established by 3 experts."
8. The sample size for the training set
- Training Set Sample Size: 633 CTA cases. (The broader algorithm development dataset included 698 total, split into 633 training and 65 validation cases, with the 266 cases being the final performance validation set).
9. How the ground truth for the training set was established
- Ground Truth Establishment for Training Set: The text states, "Algorithm development was performed using 698 (633 training, 65 validation) CTA cases from multiple sites." While it mentions the cases were selected to cover a wide range of suspected saccular aneurysms, the specific method for establishing ground truth for the training set (e.g., expert review, clinical reports, or a combination) is not explicitly detailed in the provided document. It is implied, but not stated, that a similar expert review process was used as for the test set.
(116 days)
Rapid ICH is a radiological computer aided triage and notification software for the analysis of non-enhanced head CT images. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and communication of suspected positive findings of pathologies in head CT images, for IPH, IVH, SAH, and SDH intracranial hemorrhages (ICH).
Rapid ICH uses an artificial intelligence algorithm to analyze images and highlight cases with detected ICH on a server or standalone desktop application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected ICH findings. Notifications include compressed preview images, which are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.
The results of Rapid ICH are intended to be used in conjunction with other patient information and based on professional judgment, to assist with triage/prioritization of medical images. Notified radiologists are responsible for viewing full images per the standard of care.
Rapid ICH is a radiological computer-assisted triage and notification software device. The Rapid ICH module is a non-enhanced CT (NCCT) processing module which operates within the integrated Rapid Platform to provide triage and notification of suspected intracranial hemorrhage. The Rapid ICH module is an AI/ML module. The output of the module is a priority notification to clinicians indicating the suspicion of ICH based on positive findings. The Rapid ICH module uses the basic services supplied by the Rapid Platform including DICOM processing, job management, imaging module execution and imaging output including the notification and compressed image.
Here's a breakdown of the acceptance criteria and study details for the Rapid ICH device, based on the provided text:
Acceptance Criteria and Device Performance
The primary performance goals for Rapid ICH were defined by sensitivity and specificity thresholds.
Acceptance Criteria Table and Reported Device Performance:
| Parameter | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Overall Sensitivity | >80% | 96.8% (95% CI: 92.6% - 98.6%) |
| Overall Specificity | >80% | 100% (95% CI: 97.7% - 100%) |
| AUC (Using Rapid Estimated Volume as predictor of Suspected ICH) | Not explicitly stated as a pass/fail criterion, but reported | 0.98632 |
| Time to Notification (Compared to Time to Open Exam in Standard of Care) | Significantly faster than standard of care | Rapid ICH: 0.65 minutes (95% CI: 0.63-0.67); Standard of Care: 72.58 minutes (95% CI: 45.02-100.14) |
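The reported AUC uses the Rapid-estimated hemorrhage volume as a continuous predictor of ICH status. A minimal sketch of the rank-based (Mann-Whitney) AUC computation on hypothetical volumes (not the study data):

```python
import numpy as np

def auc(pos_scores: np.ndarray, neg_scores: np.ndarray) -> float:
    """Probability a random positive case scores above a random negative case,
    counting ties as half (equivalent to the trapezoidal ROC AUC)."""
    diffs = pos_scores[:, None] - neg_scores[None, :]
    return (np.sum(diffs > 0) + 0.5 * np.sum(diffs == 0)) / diffs.size

pos = np.array([12.0, 3.5, 40.2, 0.9])  # hypothetical estimated volumes (mL), ICH-positive
neg = np.array([0.0, 0.0, 0.3])         # hypothetical estimated volumes (mL), ICH-negative
print(auc(pos, neg))  # 1.0 for this perfectly separable toy example
```

An AUC of 0.98632, as reported, means the estimated volume ranks nearly every positive case above every negative case.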
Study Details
2. Sample Size and Data Provenance:
- Test Set Sample Size: 314 cases (148 ICH positive, 166 ICH negative).
- Data Provenance: Retrospective, multicenter, multinational study. Specific countries are not detailed, but "multinational" implies diverse geographical origins.
3. Number of Experts and Qualifications for Ground Truth:
- Number of Experts: Not explicitly stated how many individual experts established the ground truth. The document mentions "expert reader truthing of the data," suggesting one or more experts.
- Qualifications of Experts: The document states "trained radiologists" are intended users and mentions "expert reader truthing." However, specific qualifications such as years of experience, board certification, or subspecialty are not provided for the ground truth experts.
4. Adjudication Method for the Test Set:
- The document implies ground truth was established by "expert reader truthing of the data," but does not specify an adjudication method (e.g., 2+1, 3+1, consensus review process if multiple readers were involved).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- No, an MRMC comparative effectiveness study was NOT mentioned for evaluating human readers' improvement with AI assistance. The study focused on the standalone performance of the AI algorithm (accuracy) and the time-to-notification benefit.
6. Standalone Performance (Algorithm Only):
- Yes, a standalone performance study was done. The reported sensitivity, specificity, and AUC values directly reflect the algorithm's performance in identifying ICH presence. The study evaluated the software's performance in identifying abnormalities, and the "time to notification" indicates the speed of the algorithm's output.
7. Type of Ground Truth Used:
- Expert Consensus: The ground truth for the test set was established through "expert reader truthing of the data." This implies a clinical expert (radiologist) determined the presence or absence of ICH.
8. Sample Size for the Training Set:
- The document states that the "minor change causing this filing, is the use of additional data for training and validation," implying the training set for this iteration of the device included more data than the predicate. However, the specific sample size of the training set is not provided in the summary.
9. How the Ground Truth for the Training Set was Established:
- Similar to the test set, the document indicates that the device was trained and validated using "retrospective case data based on expert reader truthing of the data." This suggests the ground truth for the training set was also established by expert review/diagnosis by clinical experts.
(29 days)
Rapid LVO is a radiological computer aided triage and notification software indicated for use in the analysis of CTA head images. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and communication of suspected positive ICA or MCA-M1 Large Vessel Occlusion (LVO) findings in head CTA images.
Rapid LVO uses a software algorithm to analyze images and highlight cases with suspected LVO on a server or standalone desktop application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected LVO findings. Notifications include compressed preview images. These are meant for informational purposes only and are not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.
The results of Rapid LVO are intended to be used in conjunction with other patient information and based on professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.
Rapid LVO is a radiological computer-assisted triage and notification software device. The Rapid LVO module is a contrast enhanced CTA module which operates within the integrated Rapid Platform to provide triage and notification of suspected ICA and MCA-M1 Large Vessel Occlusion (LVO) based on the following definitions:
ICA Occlusion: A high-grade stenosis or occlusion of the intracranial portion of the ICA.
MCA-M1 Occlusion: A high-grade stenosis or occlusion of the horizontal segment of the MCA-M1, defined as the segment which extends from the ICA terminus until the vessel has turned upward into the Sylvian fissure. This includes post-bifurcation M1 segments in some patients.
The LVO module uses traditional programming algorithms. The output of the module is a priority notification to clinicians indicating the suspicion of LVO based on positive findings. The Rapid LVO module uses the basic services supplied by the Rapid Platform including DICOM processing, job management, imaging module execution and imaging output including the notification and compressed image.
The Rapid LVO device, a radiological computer-aided triage and notification software for detecting Large Vessel Occlusions (LVO) in CTA head images, was evaluated against specific acceptance criteria.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Sensitivity (Se): Lower bound of 95% Confidence Interval (CI) ≥ 80% | 0.96 (95% CI: 0.91 - 0.97) |
| Specificity (Sp): Lower bound of 95% Confidence Interval (CI) ≥ 80% | 0.98 (95% CI: 0.93 - 0.99) |
| Time to Notification: ≤ 3.5 minutes | 3.18 minutes (95% CI: 3.11 - 3.25) |
Additionally, the following performance metrics were reported:
- Positive Predictive Value (PPV): 0.98
- Negative Predictive Value (NPV): 0.96
- Receiver Operating Characteristic (ROC) AUC: 0.99
2. Sample Size and Data Provenance for the Test Set
- Sample Size: 217 scans (135 positive LVO cases, 82 negative LVO cases).
- Data Provenance: The data was collected from 8 sites/studies, with locations in both the US and OUS (Outside the US). The document does not explicitly state whether the data was retrospective or prospective, but clinical validation testing of this kind typically uses retrospective data for ground truth establishment.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Three expert neuroradiologists.
- Qualifications: They are described as "expert neuroradiologists." Specific years of experience are not provided.
4. Adjudication Method for the Test Set
- The ground truth was established using a "2:3 concurrence" method: at least two of the three expert neuroradiologists had to agree on the presence or absence of an LVO for a case to be assigned its ground truth label.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study was mentioned. The study focused on the standalone performance of the algorithm.
6. Standalone Performance Study
- Yes, a standalone performance study was conducted. The reported sensitivity, specificity, PPV, NPV, and ROC AUC are all measures of the algorithm's performance without human-in-the-loop assistance.
7. Type of Ground Truth Used
- The ground truth was established by "expert neuroradiologists" using the "2:3 concurrence" method, i.e., expert consensus.
8. Sample Size for the Training Set
- The document does not explicitly state the sample size used for the training set. It only details the validation set used for performance testing.
9. How Ground Truth for the Training Set Was Established
- The document does not provide details on how the ground truth for the training set was established. It only describes ground truth establishment for the validation/test set.
(84 days)
Rapid PE Triage and Notification (PETN) is a radiological computer aided triage and notification software indicated for use in the analysis of CTPA images. The device is intended to assist hospital networks and trained clinicians in workflow triage by flagging and communication of suspected positive findings of central pulmonary embolism (PE) pathology in adults. The software is only intended to be used on single-energy exams.
Rapid PETN uses an artificial intelligence algorithm to analyze images and highlight cases with detected findings on a server or standalone desktop application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected findings. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.
The results of Rapid PETN are intended to be used in conjunction with other patient information and based on their professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care. Rapid PETN is validated for use on GE, Siemens and Toshiba scanners.
Rapid PETN is a radiological computer-assisted triage and notification software device. The Rapid PETN module is a contrast enhanced CTPA processing module which operates within the integrated Rapid Platform to provide triage and notification of suspected Central Pulmonary Emboli (PE). The PETN module is an AI/ML module. The output of the module is a priority notification to clinicians indicating the suspicion of central PE based on positive findings. The Rapid PETN module uses the basic services supplied by the Rapid Platform including DICOM processing, job management, imaging module execution and imaging output including the notification and compressed image.
Here's a summary of the acceptance criteria and study details for iSchemaView Inc.'s Rapid PE Triage and Notification (PETN) device, based on the provided text:
1. Acceptance Criteria and Reported Device Performance
The acceptance criteria were defined by the primary endpoint of the standalone performance validation study.
| Acceptance Criteria (Primary Endpoint) | Reported Device Performance |
|---|---|
| Sensitivity ≥ 0.96 (evidently applied to the point estimate, since the CI lower bound is 0.92) | 0.96 (95% CI: 0.92 - 0.98) |
| Specificity ≥ 0.89 (evidently applied to the point estimate, since the CI lower bound is 0.83) | 0.89 (95% CI: 0.83 - 0.93) |
| Processing Time (Secondary Endpoint) | 2.64 minutes (2.34-4.80 min) |
2. Sample Size and Data Provenance
- Test Set (Final Performance Validation): 306 CTPA cases.
- Data Provenance: The text does not explicitly state the country of origin. It mentions "multiple sites" for the development data. The study appears to be retrospective as it uses existing CTPA cases.
3. Number of Experts and their Qualifications for Ground Truth
- Number of Experts: 3 experts
- Qualifications: The document does not explicitly state the qualifications (e.g., radiologist with specific experience) of these experts.
4. Adjudication Method for the Test Set
- The ground truth was established by "3 experts using a 2:3 confirmation." This indicates a consensus-based approach where at least two out of three experts had to agree for a particular finding to be considered ground truth.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned. The study focused on the standalone performance of the AI algorithm for triage and notification, not on how human readers' performance improved with AI assistance.
6. Standalone Performance (Algorithm Only)
- Yes, a standalone (algorithm only without human-in-the-loop performance) study was conducted. The "Final device validation included standalone performance validation."
7. Type of Ground Truth Used
- The ground truth was established by expert consensus (3 experts using a 2:3 confirmation).
8. Sample Size for the Training Set
- Algorithm Development (Training and Development Validation): 600 CTPA cases (300 Positive, 300 Negative).
- Training Cases: 480 cases (240 Positive, 240 Negative).
- Additionally: An extra 276 negative cases were included to further assess specificity. It's not explicitly stated if these were solely for development validation or also incorporated into a later training phase, but their purpose was for model assessment.
9. How the Ground Truth for the Training Set Was Established
- The document implies that the ground truth for the 600 cases used in algorithm development (training and initial validation) was established in a similar manner to the final validation, but it doesn't explicitly detail the method (e.g., number of experts, adjudication) for the training set itself. Given the context of medical device development, it is highly probable that ground truth for the training set was also established by expert review, likely with a consensus process, to ensure high-quality labels for model training.
(92 days)
Rapid LVO is a radiological computer aided triage and notification software indicated for use in the analysis of CTA head images. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and communication of suspected positive Large Vessel Occlusion (LVO) findings in head CTA images.
Rapid LVO uses a software algorithm to analyze images and highlight cases with suspected LVO on a server or standalone desktop application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected LVO findings. Notifications include compressed preview images, that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.
The results of Rapid LVO are intended to be used in conjunction with other patient information and based on professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.
Rapid LVO 1.0 is a clinical module which operates within the integrated Rapid Platform to provide triage and notification of suspected Large Vessel Occlusion (LVO). The Rapid LVO module consists of the core Rapid Platform software which provides the administration and services for the Rapid image processing modules; and the Rapid LVO module which functions as one of many image processing modules hosted by the platform.
Rapid LVO acquires (DICOM compliant) medical image data from CTA scanners through the Rapid Platform interface.
The Rapid platform is a software package that provides for the visualization and study of changes in tissue using digital images captured by diagnostic imaging systems including CT (Computed Tomography), CTA, XA, and MRI (Magnetic Resonance Imaging), as an aid to physician diagnosis. Rapid can be installed on a customer's server or accessed online as a virtual system. It provides viewing, quantification, analysis and reporting capabilities. The Rapid platform has multiple modules a clinician may elect to run to provide analysis for decision making. The basic architecture supports the general functionality needed by the Rapid LVO imaging module, such as DICOM interfaces, job management, database functions and communications. The Rapid Platform and base functions are not under review in this submission.
The provided document (K200941) describes the 510(k) premarket notification for the iSchemaView Rapid LVO 1.0 device. Here's a breakdown of the acceptance criteria and the study proving the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for standalone performance were specified as exceeding an 80% goal for both Sensitivity (Se) and Specificity (Sp) using the lower bound of the 95% Confidence Interval. Additionally, a time-to-notification goal of less than 3.5 minutes was established based on the predicate device.
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Sensitivity (Se) > 80% (lower bound of 95% CI) | 0.970 (95% CI: 0.933, 0.987) |
| Specificity (Sp) > 80% (lower bound of 95% CI) | 0.956 (95% CI: 0.919, 0.977) |
| ROC AUC | 0.99 (95% CI: 0.972, 0.995) |
| Time to Notification < 3.5 minutes | 2.86 min (95% CI: 2.79, 2.92) |
| PPV (at 45% prevalence) | 0.95 (95% CI: 0.90, 0.97) |
| NPV (at 45% prevalence) | 0.98 (95% CI: 0.94, 0.99) |
As shown in the table, all reported performance metrics exceeded the specified acceptance criteria.
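The PPV and NPV rows follow directly from sensitivity, specificity, and the stated 45% prevalence via Bayes' rule. A minimal sketch reproducing the table values (small differences may reflect rounding of the underlying Se/Sp):

```python
def predictive_values(se: float, sp: float, prev: float) -> tuple[float, float]:
    """PPV and NPV from sensitivity, specificity, and an assumed prevalence."""
    ppv = se * prev / (se * prev + (1 - sp) * (1 - prev))
    npv = sp * (1 - prev) / (sp * (1 - prev) + (1 - se) * prev)
    return ppv, npv

ppv, npv = predictive_values(0.970, 0.956, 0.45)
print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}")  # ~0.95 and ~0.97-0.98, close to the reported 0.95 / 0.98
```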
2. Sample Size Used for the Test Set and Data Provenance
The document mentions that iSchemaView performed "standalone performance in accordance with the 892.2080 special controls to show acceptance of the clinical performance of the Rapid LVO module." However, the exact sample size used for this test set is not explicitly stated in the provided text.
Regarding data provenance, the document does not specify the country of origin for the clinical data used in the performance validation. It only broadly states that the Rapid System performance "has been validated through the use of phantoms (Rapid core indications) and clinical data (Rapid LVO)." The data appears to be retrospective, as it was used for validation after development rather than prospectively collected for a clinical trial.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not explicitly state the number of experts used to establish the ground truth for the test set or their specific qualifications (e.g., "radiologist with 10 years of experience").
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1, none) used for establishing the ground truth of the test set. It implies that the "clinical data" was used for validation, but the process of expert consensus or adjudication is not detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was NOT done according to the provided text. The study described is a "standalone performance" evaluation, meaning it assesses the algorithm's performance independent of human readers. Therefore, there is no information about how human readers improve with or without AI assistance in this document.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance study was done. The document explicitly states: "iSchemaView performed standalone performance in accordance with the 892.2080 special controls to show acceptance of the clinical performance of the Rapid LVO module." The reported Sensitivity, Specificity, and ROC AUC are metrics of this standalone performance.
7. The Type of Ground Truth Used
The document refers to "clinical data" as the basis for the Rapid LVO performance validation. While it doesn't explicitly state "expert consensus" or "pathology" as the ground truth method, for radiological computer-aided triage software, the ground truth is typically established by expert consensus (e.g., multiple expert radiologists reviewing cases and reaching a consensus on the presence or absence of LVO) or by clinical outcomes data (e.g., confirmation of LVO through other gold standard diagnostics or surgical findings). Given the nature of the device for triage and notification, it is highly likely that the ground truth was established by expert consensus of radiologists or neurologists, possibly combined with clinical follow-up data to confirm true LVO. However, the document does not definitively specify which.
8. The Sample Size for the Training Set
The document does not specify the sample size for the training set. It only mentions that the device uses "a software algorithm to analyze images."
9. How the Ground Truth for the Training Set Was Established
The document does not describe how the ground truth for the training set was established. It briefly mentions that the predicate device (Rapid ICH) uses "machine learning software implementations," while Rapid LVO uses "traditional algorithms," but it does not detail the training data or its annotation process for either.
(94 days)
Rapid ASPECTS is a computer-aided diagnosis (CADx) software device used to assist the clinician in the assessment and characterization of brain tissue abnormalities using CT image data. The Software automatically registers images and segments and analyzes ASPECTS Regions of Interest (ROIs). Rapid ASPECTS extracts image data for the ROI(s) to provide analysis and computer analytics based on morphological characteristics. The imaging features are then synthesized by an artificial intelligence algorithm into a single ASPECT (Alberta Stroke Program Early CT) Score. Rapid ASPECTS is indicated for evaluation of patients presenting for diagnostic imaging workup with known MCA or ICA occlusion, for evaluation of extent of disease. Extent of disease refers to the number of ASPECTS regions affected which is reflected in the total score. This device provides information that may be useful in the characterization of early ischemic brain tissue injury during image interpretation (within 6 hours). Rapid ASPECTS provides a comparative analysis to the ASPECTS standard of care radiologist assessment using the ASPECTS atlas definitions and atlas display including highlighted ROIs and numerical scoring.
Rapid ASPECTS provides an automatic ASPECT score based on the case input file for the physician. The score includes which ASPECT regions are identified based on regional imaging features derived from non-contrast computed tomography (NCCT) brain image data. The results are generated based on the Alberta Stroke Program Early CT Score (ASPECTS) guidelines and provided to the clinician for review and verification. At the discretion of the clinician, the scores may be adjusted based on other clinical factors the clinician may integrate though the Rapid Platform User Interface.
The ASPECTS software module processing pipeline performs four major tasks:
- Orientation and spatial normalization of the input imaging data (rigid registration/alignment with anatomical template);
- Delineation of pre-defined regions of interest on the normalized input data and computing numerical values characterizing underlying voxel values within those regions;
- Identification and highlighting previous/old stroke areas along with areas of early ischemic change; and
- Labeling of these delineated regions and providing a summary score reflecting the number of regions with early ischemic change as per ASPECTS guidelines.
Subsequently, the system notifies the physician of the ASPECT score which then requires the confirmation by the physician that a Large Vessel Occlusion (LVO) is detected. The ASPECTS information is then available for the physician to review and edit prior to pushing the data to a PACS or Workstation. The final summary score together with the regions selected and underlying voxel values are then sent to the Picture Archiving and Communication System (PACS) to become a part of the permanent patient medical record.
Here's a summary of the acceptance criteria and the study details for the Rapid ASPECTS device, based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The FDA clearance document does not explicitly state pre-defined acceptance criteria in terms of specific performance metrics (e.g., minimum accuracy, sensitivity, specificity thresholds). Instead, the performance is demonstrated through a comparative effectiveness study showing improvement in human reader agreement.
| Acceptance Criteria Category | Specific Criteria (Implicitly from study goals) | Reported Device Performance (as stated in document) |
|---|---|---|
| Clinical Efficacy | Improvement in agreement with expert consensus read for ASPECTS scoring. | Readers (neurologists, radiologists, emergency medicine, neurocritical care specialists) significantly increased their agreement with an expert consensus read when using Rapid ASPECTS (P<0.0001). Readers agreed, on average, with almost ½ a region (0.425, 95% CI 0.11 - 0.74) more per scan with Rapid ASPECTS than without. Non-neuroradiologists improved their agreement from 73.6% to 79.8% with Rapid ASPECTS, which is comparable to the agreement achieved by expert neuroradiologist readers with each other. The software allows the non-expert physician to perform at the expert-like level. |
| Safety | Minimizing risks associated with incorrect scoring, misuse, and device failure. | Identified risks include incorrect scoring (false positive/negative), misuse (unintended patient population/incompatible hardware), and device failure. The document concludes that probable benefits outweigh probable risks, given general and special controls and application of mitigating measures. The device is unlikely to decrease diagnostic performance, and misuse risks are comparable to other radiological image processing devices. A gating condition of Large Vessel Occlusion (LVO) determination guides ASPECTS use, averting many stroke mimic confounding risks. |
| Technical Performance | Accurate representation of key processing parameters and adherence to design requirements/specifications. | Extensive performance validation testing, software verification, and validation testing demonstrated that the Rapid ASPECTS module provides accurate representation of key processing parameters under a range of clinically relevant parameters and perturbations. The module met all design requirements and specifications. |
2. Sample Size Used for the Test Set and Data Provenance
- Sample size for the test set: 50 cases. Each case had 10 regions scored independently.
- Data Provenance: Retrospective data from case data. The country of origin is not specified in the provided document.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of experts: Three experts.
- Qualifications of experts: The document refers to them as "expert neuroradiologist readers." Specific years of experience or other detailed qualifications are not provided.
4. Adjudication Method for the Test Set
- Adjudication method: "Data truthing was performed by three experts." This implies an expert consensus method, but the specific process (e.g., whether it was 2+1, 3+1, or another form of consensus) is not detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of Improvement
- MRMC Comparative Effectiveness Study: Yes, an MRMC study was done, described as a "concurrent read, cross-over study design."
- Effect size of improvement:
- Readers (neurologists, radiologists, emergency medicine, neurocritical care specialists) significantly increased their agreement with an expert consensus read (p<0.0001).
- With Rapid ASPECTS, readers agreed, on average, with 0.425 more regions (95% CI 0.11 - 0.74) per scan than without Rapid ASPECTS.
- Non-neuroradiologists improved their level of agreement with experts from 73.6% to 79.8%.
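As a toy illustration of the per-scan agreement metric (region-level agreement with the expert consensus read), with hypothetical reads for one 10-region scan:

```python
import numpy as np

# Hypothetical region calls for one scan (1 = early ischemic change) over the 10 ASPECTS regions
consensus = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])
unaided   = np.array([1, 0, 1, 1, 0, 0, 0, 0, 0, 0])  # reader without Rapid ASPECTS
aided     = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 1])  # same reader with Rapid ASPECTS

gain = (aided == consensus).sum() - (unaided == consensus).sum()
print(gain)  # per-scan gain in agreed regions; the study reports an average gain of +0.425
```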
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- The document implies that the primary performance evaluation was focused on the AI-assisted human reading. While the device "provides an automatic ASPECT score," the clinical study focuses on how humans improve with the device. The "Performance Data" section mentions "extensive performance validation testing and software verification and validation testing of the Rapid ASPECTS module both as standalone software and as integrated within the Rapid Platform," indicating that standalone testing for technical performance (e.g., accuracy against a "truth" ASPECTS score) was performed, but specific performance metrics for this standalone algorithm were not provided in this summary. The clinical efficacy, however, is reported in the context of human-in-the-loop.
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)
- Type of Ground Truth: Expert consensus. "Data truthing was performed by three experts" and the device performance was measured against "an expert consensus read."
8. The Sample Size for the Training Set
- The sample size for the training set is not explicitly stated in the provided text. The document refers to "historical training data" but does not give a number.
9. How the Ground Truth for the Training Set Was Established
- The document states that the "Rapid ASPECTS analytics calculates morphological characteristics of brain tissue using the historical training data." It further explains that "The results are generated based on the Alberta Stroke Program Early CT Score (ASPECTS) guidelines." However, the specific method for establishing the ground truth for this "historical training data" (e.g., number of experts, their qualifications, adjudication method) is not detailed in this summary. It can be inferred that it likely also involved expert assessment based on ASPECTS guidelines.