510(k) Data Aggregation

Found 2 results

    K Number
    K232156
    Device Name
    Rapid ASPECTS (v3)
    Manufacturer
    iSchemaView, Inc.
    Date Cleared
    2024-01-19 (183 days)

    Product Code
    Regulation Number
    892.2060
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Intended Use

    Rapid ASPECTS is a computer-aided diagnosis (CADx) software device used to assist the clinician in the assessment and characterization of brain tissue abnormalities using CT image data. The Software automatically registers images and segments and analyzes ASPECTS Regions of Interest (ROIs). Rapid ASPECTS extracts image data for the ROI(s) to provide analysis and computer analytics based on morphological characteristics. The imaging features are then synthesized by an artificial intelligence algorithm into a single ASPECT (Alberta Stroke Program Early CT) Score. Rapid ASPECTS is indicated for evaluation of adult patients presenting for diagnostic imaging workup, for evaluation of extent of disease. Extent of disease refers to the number of ASPECTS regions affected, which is reflected in the total score. This device provides information that may be useful in the characterization of early ischemic brain tissue injury for ischemic stroke patients (typically < 24 hours since last known well) during image interpretation following the standard of care. Rapid ASPECTS provides a comparative analysis to the ASPECTS standard of care radiologist assessment using the ASPECTS atlas definitions and atlas display, including highlighted ROIs and numerical scoring. Rapid ASPECTS presents the original and annotated images for concurrent reads.

    Device Description

    The Rapid platform is Software as a Medical Device (SaMD), which provides for the visualization and study of changes in tissue and vasculature using digital images captured by diagnostic imaging systems including CT (Computed Tomography), CTA (CT Angiography), MRI (Magnetic Resonance Imaging) and MRA (MR Angiography) as an aid to physician diagnosis. Rapid can be installed on a customer's Server or it can be accessed online as a virtual system. It provides viewing, quantification, analysis, and reporting capabilities. The Rapid platform has multiple modules a clinician may elect to run and provide analysis for decision making.

    Rapid ASPECTS provides an automatic ASPECT Score based on the case input file for the physician. The score includes which ASPECT regions are identified based on regional imaging features derived from Non-Contrast Computed Tomography (NCCT) brain image data. The results are generated based on the Alberta Stroke Program Early CT Score (ASPECTS) guidelines and provided to the clinician for review and verification. At the discretion of the clinician, the scores may be adjusted based on other clinical factors the clinician may integrate through the Rapid Platform Interface.

    The ASPECTS software module processing pipeline performs four major tasks:

    • Orientation and spatial normalization of the input imaging data (rigid registration/alignment with anatomical template);
    • Delineation of pre-defined regions of interest on the normalized input data and computing numerical values characterizing underlying voxel values within those regions;
    • Identification and highlighting of previous/old stroke areas along with areas of early ischemic change; and
    • Labeling of these delineated regions and providing a summary score reflecting the number of regions with early ischemic change as per ASPECTS guidelines.
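    The 510(k) summary does not disclose how these four tasks are implemented. Purely as an illustration, the toy Python sketch below assumes registration (task 1) has already been done upstream and reduces early-ischemic-change detection to a contralateral mean-attenuation comparison; the region names, the `contra_map` structure, and the threshold are all hypothetical.

```python
import numpy as np

# The 10 ASPECTS regions: caudate, lentiform, internal capsule, insula, M1-M6.
ASPECTS_REGIONS = ["C", "L", "IC", "I", "M1", "M2", "M3", "M4", "M5", "M6"]

def score_aspects(ncct, atlas_labels, contra_map, hu_drop=2.0):
    """Toy ASPECTS scorer over a registered NCCT volume.

    ncct         -- 3D array of Hounsfield units, already rigidly registered
                    to atlas space (task 1 is assumed done upstream)
    atlas_labels -- 3D integer array; labels 1..10 mark each ASPECTS ROI
    contra_map   -- hypothetical map from each region label to the label of
                    its contralateral counterpart
    """
    affected = []
    for region_id, name in enumerate(ASPECTS_REGIONS, start=1):
        # Task 2: delineate the ROI and summarize the underlying voxel values.
        roi = ncct[atlas_labels == region_id]
        mirror = ncct[atlas_labels == contra_map[region_id]]
        # Task 3 (greatly simplified): flag early ischemic change as a
        # mean-attenuation drop relative to the contralateral region.
        if roi.size and mirror.size and roi.mean() < mirror.mean() - hu_drop:
            affected.append(name)
    # Task 4: deduct one point per affected region (10 = normal scan).
    return 10 - len(affected), affected
```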

    Subsequently, the system notifies the physician of the availability of the ASPECT Score with an overlaid atlas. The ASPECTS information is then available for the physician to review and edit prior to sending the data to a PACS or Workstation. The final summary score, together with the regions selected and underlying voxel values, is then sent to the Picture Archiving and Communication System (PACS) to become part of the permanent patient medical record.
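    The summary does not describe the transfer mechanism. As a generic illustration of the "send results to PACS" step only, here is a minimal DICOM C-STORE sketch using pynetdicom; the host, port, AE titles, and file name are placeholders, and the annotated result is assumed to have been written out as a secondary-capture DICOM file beforehand.

```python
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import SecondaryCaptureImageStorage

ae = AE(ae_title="ASPECTS_SCU")
ae.add_requested_context(SecondaryCaptureImageStorage)

# Placeholder PACS endpoint.
assoc = ae.associate("pacs.example.org", 11112, ae_title="PACS")
if assoc.is_established:
    ds = dcmread("aspects_annotated.dcm")  # hypothetical result file
    status = assoc.send_c_store(ds)
    if status:
        print(f"C-STORE status: 0x{status.Status:04X}")
    assoc.release()
```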

    AI/ML Overview

    The provided text describes the acceptance criteria and the study that proves the device meets those criteria for iSchemaView, Inc.'s Rapid ASPECTS (v3) CADx software.

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria for Rapid ASPECTS (v3)

    | Criterion | Reported Device Performance (Rapid ASPECTS v3) |
    |---|---|
    | Standalone performance: percent agreement of Rapid ASPECTS with the reference at the ASPECTS region level | 82.8% |
    | Standalone performance: percent agreement of Rapid ASPECTS with the reference at the scan level | 82.8% (comparable, with overlapping CIs, to the pairwise agreement between any two of the three experts) |
    | Clinical validation (reader improvement): demonstrate that reader scoring of the 10 ASPECTS regions is more closely aligned with the reference standard when read in conjunction with Rapid ASPECTS than without it | The fixed effect of the Rapid assist increases percent agreement on average by about 0.02: from 82% without assistance to 84% with assistance (excluding the expert), and from 80.4% to 83.3% averaged across readers. A statistically significant improvement in the accuracy of the six readers' scores was demonstrated when scoring was performed with the Rapid ASPECTS output. The benefit was most substantial for non-neuroradiologist readers; no significant impact (positive or negative) on the expert neuroradiologist's scores was observed. |
    | Supplemental confounder/mimic sensitivity assessment: assess the impact of confounders/mimics | Only 3 of 115 reads (2.6%) changed based on Rapid results, showing minimal effect of confounders/mimics on ASPECTS performance. |
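    The document reports region-level and scan-level percent agreement without defining the computation. One plausible reading, sketched below with entirely hypothetical binary region calls, is simple proportion agreement over all scans × 10 regions, with "scan level" taken as agreement of the total ASPECT score.

```python
import numpy as np

def percent_agreement(a, b):
    """Proportion of matching binary region calls (1 = early ischemic change)."""
    return float((a == b).mean())

rng = np.random.default_rng(0)
# Hypothetical calls for 88 scans x 10 regions, standing in for real reads.
reference = rng.integers(0, 2, size=(88, 10))
device = np.where(rng.random((88, 10)) < 0.828, reference, 1 - reference)

region_level = percent_agreement(device, reference)
# One plausible reading of "scan level": the total ASPECT scores match.
scan_level = float((device.sum(axis=1) == reference.sum(axis=1)).mean())

print(f"region-level agreement: {region_level:.1%}")
print(f"scan-level agreement:   {scan_level:.1%}")
```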

    2. Sample Size Used for the Test Set and Data Provenance

    • Standalone Performance Test Set Sample Size: 88 scans (from the "Suspected Stroke" category)
    • Reader Improvement Test Set Sample Size: 102 scans (including 88 "Suspected Stroke" and 14 "Stroke Mimic" cases)
    • Supplemental Confounder/Mimic Sensitivity Assessment Sample Size: a separate set of supplemental cases. The total is not stated explicitly as a single number, but the listed case types and counts are: Abscess (3), Dural AVF (4), Hydrocephalus (4), Hypertensive Encephalopathy (2), Isodense SDH (4), Multiple Sclerosis (3), and Traumatic Brain Injury (3), i.e., 23 cases in all. These cases were reviewed for 115 reads, consistent with 23 cases × 5 readers.
    • Data Provenance: The data included both US (79.41% of the reader improvement test set) and OUS (20.59%) cases, from a mix of scanner manufacturers: GE (23), Siemens (28), Canon/Toshiba (22), and Philips (29), totaling 102 scans, which matches the reader improvement test set. The description suggests the data are retrospective, as it describes a collection of existing scans.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The ground truth for both the standalone performance and the reader improvement study was established using:

    • Three experts to establish the reference standard for the standalone performance study.
    • The clinical reader study involved one expert neuroradiologist and five non-expert typical readers. While the specific qualifications for "typical readers" aren't detailed, the text implies they represent general clinicians who evaluate CT scans in community hospitals and primary stroke centers. The neuroradiologist is explicitly identified as an expert.

    4. Adjudication Method for the Test Set

    The document explicitly states: "The primary reader improvement endpoint is to demonstrate that reader scoring of the 10 ASPECT regions is more closely aligned with the reference standard when read in conjunction with Rapid ASPECTS than without Rapid ASPECTS." And for standalone performance: "The percent agreement of Rapid ASPECTS to the reference at the ASPECTS region level and at the scan level is 82.8%. Both are comparable (overlapping CI) to the pairwise agreement between any two of the three experts."

    This indicates that a reference standard was established by experts. While the specific method of reaching this reference standard (e.g., 2+1, consensus) is not explicitly detailed, the mention of "pairwise agreement between any two of the three experts" for the standalone performance suggests that the ground truth was derived from a consensus or adjudicated process involving these three experts.
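    Since the adjudication scheme is not specified, the following is only an assumption: a common construction for a three-expert reference is a per-region majority vote, with pairwise expert agreement computed for the CI comparison the document mentions.

```python
import numpy as np

def majority_vote(expert_calls):
    """expert_calls: (3, n_scans, 10) binary region calls from three experts.
    Returns the per-region majority (at least 2 of 3) as the reference."""
    return (expert_calls.sum(axis=0) >= 2).astype(int)

def pairwise_agreement(expert_calls):
    """Percent agreement between each pair of experts, for comparison against
    the device's agreement with the reference."""
    pairs = [(0, 1), (0, 2), (1, 2)]
    return [float((expert_calls[i] == expert_calls[j]).mean()) for i, j in pairs]
```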

    5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study

    Yes, a multi-reader multi-case (MRMC) comparative effectiveness study was done. This is referred to as the "Clinical Validation Reader Improvement" study.

    • Effect size of human reader improvement with AI vs. without AI assistance: the fixed effect of the Rapid assist increases percent agreement on average by about 0.02. Specifically, agreement increases from 82% without assistance to 84% with assistance (excluding the expert); averaged across readers, agreement increases from 80.4% to 83.3% (a toy computation of these deltas follows this list).
      • The benefit was most substantial among the non-neuroradiologist readers.
      • The system allowed non-expert physicians to perform at an "expert-like level."
      • There was no significant impact (positive or negative) on the score of the expert neuroradiologist.
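    The study itself reports a fixed effect from a model whose details are not given; as a toy illustration of the quoted agreement deltas only (hypothetical arrays, not the study's mixed-effects analysis):

```python
import numpy as np

def reader_agreement(reads, reference):
    """reads: (n_readers, n_scans, 10) binary calls; per-reader % agreement
    with the reference standard."""
    return (reads == reference[None]).mean(axis=(1, 2))

def assist_effect(unaided, aided, reference):
    """Average per-reader change in agreement when reading with the device
    output (the document reports roughly +0.02, i.e., 80.4% -> 83.3%)."""
    return float((reader_agreement(aided, reference)
                  - reader_agreement(unaided, reference)).mean())
```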

    6. Standalone Performance (i.e., algorithm only without human-in-the-loop performance)

    Yes, a standalone performance study was done.

    • Results: The percent agreement of Rapid ASPECTS to the reference at both the ASPECTS region level and at the scan level was reported as 82.8%. This was found to be comparable (with overlapping confidence intervals) to the pairwise agreement between any two of the three experts who established the ground truth.

    7. The Type of Ground Truth Used

    The ground truth used was expert consensus / expert reading. It was established by a panel of experts. The text refers to "the reference" established by "three experts" for the standalone performance and a "reference standard" for the reader improvement study.

    8. The Sample Size for the Training Set

    The document does not explicitly state the sample size for the training set. It only describes the test sets used for validation.

    9. How the Ground Truth for the Training Set Was Established

    As the training set sample size is not provided, the method for establishing its ground truth is also not specified in the provided text.


    K Number
    K200760
    Device Name
    Rapid ASPECTS
    Manufacturer
    iSchemaView, Inc.
    Date Cleared
    2020-06-26 (94 days)

    Product Code
    Regulation Number
    892.2060
    Reference & Predicate Devices
    Predicate For
    Intended Use

    Rapid ASPECTS is a computer-aided diagnosis (CADx) software device used to assist the clinician in the assessment and characterization of brain tissue abnormalities using CT image data. The Software automatically registers images and segments and analyzes ASPECTS Regions of Interest (ROIs). Rapid ASPECTS extracts image data for the ROI(s) to provide analysis and computer analytics based on morphological characteristics. The imaging features are then synthesized by an artificial intelligence algorithm into a single ASPECT (Alberta Stroke Program Early CT) Score. Rapid ASPECTS is indicated for evaluation of patients presenting for diagnostic imaging workup with known MCA or ICA occlusion, for evaluation of extent of disease. Extent of disease refers to the number of ASPECTS regions affected which is reflected in the total score. This device provides information that may be useful in the characterization of early ischemic brain tissue injury during image interpretation (within 6 hours). Rapid ASPECTS provides a comparative analysis to the ASPECTS standard of care radiologist assessment using the ASPECTS atlas definitions and atlas display including highlighted ROIs and numerical scoring.

    Device Description

    Rapid ASPECTS provides an automatic ASPECT score based on the case input file for the physician. The score includes which ASPECT regions are identified based on regional imaging features derived from non-contrast computed tomography (NCCT) brain image data. The results are generated based on the Alberta Stroke Program Early CT Score (ASPECTS) guidelines and provided to the clinician for review and verification. At the discretion of the clinician, the scores may be adjusted based on other clinical factors the clinician may integrate through the Rapid Platform User Interface.

    The ASPECTS software module processing pipeline performs four major tasks:

    • Orientation and spatial normalization of the input imaging data (rigid registration/alignment with anatomical template);
    • Delineation of pre-defined regions of interest on the normalized input data and computing numerical values characterizing underlying voxel values within those regions;
    • Identification and highlighting previous/old stroke areas along with areas of early ischemic change; and
    • Labeling of these delineated regions and providing a summary score reflecting the number of regions with early ischemic change as per ASPECTS guidelines.

    Subsequently, the system notifies the physician of the ASPECT score, which then requires confirmation by the physician that a Large Vessel Occlusion (LVO) is detected (a minimal sketch of this gating step appears below). The ASPECTS information is then available for the physician to review and edit prior to pushing the data to a PACS or Workstation. The final summary score, together with the regions selected and underlying voxel values, is then sent to the Picture Archiving and Communication System (PACS) to become part of the permanent patient medical record.
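    The actual gating implementation is not described; this is a deliberately minimal sketch, assuming a simple release gate in which the score is withheld until the physician confirms an LVO. All type and function names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AspectsResult:
    score: int                  # 0-10 summary ASPECT score
    affected_regions: list      # region names with early ischemic change

def release_for_review(result: AspectsResult,
                       lvo_confirmed: bool) -> Optional[AspectsResult]:
    """Release the ASPECTS output for physician review/editing only after the
    physician has confirmed a Large Vessel Occlusion; otherwise withhold it."""
    if not lvo_confirmed:
        return None  # gate closed: no score is presented for this case
    return result
```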

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study details for the Rapid ASPECTS device, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The FDA clearance document does not explicitly state pre-defined acceptance criteria in terms of specific performance metrics (e.g., minimum accuracy, sensitivity, specificity thresholds). Instead, the performance is demonstrated through a comparative effectiveness study showing improvement in human reader agreement.

    | Acceptance Criteria Category | Specific Criteria (implicit from study goals) | Reported Device Performance (as stated in document) |
    |---|---|---|
    | Clinical efficacy | Improvement in agreement with the expert consensus read for ASPECTS scoring | Readers (neurologists, radiologists, emergency medicine, and neurocritical care specialists) significantly increased their agreement with an expert consensus read when using Rapid ASPECTS (p < 0.0001). Readers agreed, on average, on almost half a region (0.425, 95% CI 0.11-0.74) more per scan with Rapid ASPECTS than without. Non-neuroradiologists improved their agreement from 73.6% to 79.8% with Rapid ASPECTS, comparable to the agreement achieved by expert neuroradiologist readers with each other. The software allows the non-expert physician to perform at an expert-like level. |
    | Safety | Minimizing risks associated with incorrect scoring, misuse, and device failure | Identified risks include incorrect scoring (false positive/negative), misuse (unintended patient population/incompatible hardware), and device failure. The document concludes that probable benefits outweigh probable risks, given general and special controls and the application of mitigating measures. The device is unlikely to decrease diagnostic performance, and misuse risks are comparable to other radiological image-processing devices. A gating condition of Large Vessel Occlusion (LVO) determination guides ASPECTS use, averting many stroke-mimic confounding risks. |
    | Technical performance | Accurate representation of key processing parameters and adherence to design requirements/specifications | Extensive performance validation testing and software verification and validation testing demonstrated that the Rapid ASPECTS module provides an accurate representation of key processing parameters under a range of clinically relevant parameters and perturbations. The module met all design requirements and specifications. |

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample size for the test set: 50 cases. Each case had 10 regions scored independently.
    • Data Provenance: Retrospective case data. The country of origin is not specified in the provided document.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of experts: Three experts.
    • Qualifications of experts: The document refers to them as "expert neuroradiologist readers." Specific years of experience or other detailed qualifications are not provided.

    4. Adjudication Method for the Test Set

    • Adjudication method: "Data truthing was performed by three experts." This implies an expert consensus method, but the specific process (e.g., whether it was 2+1, 3+1, or another form of consensus) is not detailed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of Improvement

    • MRMC Comparative Effectiveness Study: Yes, an MRMC study was done, described as a "concurrent read, cross-over study design."
    • Effect size of improvement:
      • Readers (neurologists, radiologists, emergency medicine, neurocritical care specialists) significantly increased their agreement with an expert consensus read (p<0.0001).
      • With Rapid ASPECTS, readers agreed, on average, on 0.425 more regions (95% CI 0.11-0.74) per scan than without Rapid ASPECTS (one way to obtain such an interval is sketched after this list).
      • Non-neuroradiologists improved their level of agreement with experts from 73.6% to 79.8%.
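    The document does not say how the 95% CI for the 0.425-region improvement was computed. One standard way to produce such an interval from paired per-scan differences is a percentile bootstrap, sketched here; the input `diffs` array is hypothetical.

```python
import numpy as np

def bootstrap_mean_ci(diffs, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the mean of paired per-scan differences
    (regions agreed with the reference: aided minus unaided)."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(diffs), size=(n_boot, len(diffs)))
    means = diffs[idx].mean(axis=1)
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return diffs.mean(), (lo, hi)

# Usage with hypothetical paired differences (one value per scan):
# diffs = aided_regions_agreed - unaided_regions_agreed
# mean, (lo, hi) = bootstrap_mean_ci(diffs)
```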

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • The primary performance evaluation focuses on AI-assisted human reading. Although the device "provides an automatic ASPECT score," the clinical study measures how humans improve with the device. The "Performance Data" section mentions "extensive performance validation testing and software verification and validation testing of the Rapid ASPECTS module both as standalone software and as integrated within the Rapid Platform," indicating that standalone testing of technical performance (e.g., accuracy against a "truth" ASPECTS score) was performed, but specific standalone performance metrics are not provided in this summary. Clinical efficacy is reported only in the human-in-the-loop context.

    7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)

    • Type of Ground Truth: Expert consensus. "Data truthing was performed by three experts" and the device performance was measured against "an expert consensus read."

    8. The Sample Size for the Training Set

    • The sample size for the training set is not explicitly stated in the provided text. The document refers to "historical training data" but does not give a number.

    9. How the Ground Truth for the Training Set Was Established

    • The document states that the "Rapid ASPECTS analytics calculates morphological characteristics of brain tissue using the historical training data." It further explains that "The results are generated based on the Alberta Stroke Program Early CT Score (ASPECTS) guidelines." However, the specific method for establishing the ground truth for this "historical training data" (e.g., number of experts, their qualifications, adjudication method) is not detailed in this summary. It can be inferred that it likely also involved expert assessment based on ASPECTS guidelines.
