Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K173224
    Device Name
    SPIN-SWI
    Manufacturer
    SpinTech, Inc.
    Date Cleared
    2018-02-23

    (143 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K100335

    Intended Use

    The SpinTech, Inc. SPIN-SWI application is intended for use in the post-acquisition image enhancement of MRI acquired 3D gradient-echo images of the brain. When used in combination with other clinical information, the SPIN-SWI application may aid the qualified radiologist with diagnosis by providing enhanced visualization of structures containing venous blood such as cerebral venous vasculature.

    Device Description

    The SPIN-SWI device includes a post-processing algorithm that enhances the contrast of tissues with different susceptibilities from 3D gradient-echo MRI images. The susceptibility of a biological tissue relates to the concentration of iron within it, which can be present in the form of deoxyhemoglobin, ferritin, hemosiderin, or other molecules. An MRI scan results in both magnitude and phase images. While magnitude is most commonly used clinically, the phase information can also be useful as it relates directly to the magnetic field. When tissues or objects of differing magnetic susceptibility are present, they perturb the field around them. This effect can be seen directly in phase images. While this perturbation already leads to signal loss in magnitude images, thus creating contrast, the phase information can still be used to enhance this contrast for local susceptibility changes. Enhancing this contrast allows visualization of structures containing venous blood, such as cerebral venous vasculature, that may not have been visible prior to enhancement. Some technical challenges of SWI include eliminating the effects of unwanted background fields and choosing parameters to create optimal contrast. SPIN-SWI software works in conjunction with an FDA-cleared third-party DICOM viewer as an image post-processing solution on a PC workstation. The DICOM viewer (ORS Visual) was FDA cleared on 4/29/2010 via K100335 and is used to transmit DICOM data and display the input and output images. The SPIN-SWI software application performs the SWI post-processing on 3D GRE input images to reconstruct the SWI output images.
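    The phase-mask mechanism described above is the core of conventional SWI processing. As an illustration only (SpinTech's actual algorithm and parameters are not disclosed in the summary), a minimal NumPy sketch of the standard Haacke-style negative-phase-mask enhancement might look like:

```python
import numpy as np

def swi_enhance(magnitude, phase, n=4):
    """Illustrative SWI post-processing sketch (not SpinTech's algorithm).

    magnitude, phase: 2D gradient-echo arrays; phase is assumed to be
    already high-pass filtered (background fields removed) and in radians.
    n: number of phase-mask multiplications (4 is a common choice).
    """
    # Negative phase mask: voxels with negative phase (e.g. venous blood
    # at typical echo times) are attenuated; all others pass unchanged.
    mask = np.where(phase < 0, (phase + np.pi) / np.pi, 1.0)
    mask = np.clip(mask, 0.0, 1.0)
    # Repeated multiplication deepens the contrast of
    # susceptibility-shifted structures in the magnitude image.
    return magnitude * mask ** n
```

    The design choice worth noting is that the mask only ever darkens voxels, so the enhancement cannot create signal that was not in the magnitude image; it amplifies contrast already implied by the phase.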

    AI/ML Overview

    Here's an analysis of the provided text to extract information about the acceptance criteria and the study proving the device meets them:

    Disclaimer: The provided document is a 510(k) summary, which often focuses on demonstrating substantial equivalence to a predicate device rather than exhaustive clinical study details for novel technologies. Therefore, some information requested, particularly regarding detailed clinical study performance, may not be explicitly present or extensively elaborated upon.


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not present a specific table of quantitative acceptance criteria and corresponding reported device performance for a clinical outcome study. Instead, it states that "All predefined acceptance criteria for the engineering [and] performance testing were met for all test cases across different imaging parameters, field strength and different subjects" (page 6). This refers to internal verification and validation of the software's technical performance.

    Similarly, for clinical validation, it states: "All predefined acceptance criteria for clinical validation testing, including clinical user needs testing, as a part of the SPIN-SWI performance validation testing efforts were met across all test cases. The results of the clinical validation related testing on the SPIN-SWI application demonstrate[d] acceptable image quality and that all clinical user needs are met." (page 6).

    This indicates qualitative acceptance regarding image quality and user needs, but not specific quantitative metrics like sensitivity, specificity, or improvement in diagnostic accuracy. The primary goal of this submission is to show substantial equivalence to a predicate device based on similar technological characteristics and performance, rather than a quantifiable improvement over existing methods or fulfilling specific performance thresholds in a clinical trial.

    In summary, there is no discrete table of acceptance criteria and performance data as you might expect from a full clinical trial report. The acceptance criteria are broadly described as meeting "all performance testing" and "all clinical user needs."


    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not explicitly stated in terms of number of cases or patients. The document refers to testing "across different imaging parameters, field strength and different subjects" (for non-clinical testing) and "all test cases" for clinical validation.
    • Data Provenance: Not specified. It's not mentioned if the data was retrospective or prospective, nor the country of origin. Given the focus on substantial equivalence, it's likely pre-existing or simulated data was used for much of the non-clinical testing, and possibly a limited set of clinical cases for validation.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not specified.
    • Qualifications of Experts: The device's intended user is a "qualified radiologist" (page 2, 5). The "clinical user needs testing" suggests involvement of such experts, but their number, specific qualifications (e.g., years of experience, subspecialty), or their role in establishing ground truth are not detailed.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not specified. Given the lack of detail on specific expert involvement and ground truth establishment (other than "clinical user needs testing"), no adjudication method (e.g., 2+1, 3+1) is mentioned.
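    For context, a "2+1" protocol means two primary readers label each case independently and a third expert is consulted only on disagreements ("3+1" adds a third primary reader). A minimal sketch of the 2+1 rule (hypothetical; no such process is described in the submission):

```python
def adjudicate_2plus1(reader1, reader2, adjudicator):
    """2+1 adjudication: take the agreed label when the two primary
    readers concur; otherwise defer to the adjudicating expert.
    Each argument is a per-case list of labels (any hashable values)."""
    final = []
    for r1, r2, adj in zip(reader1, reader2, adjudicator):
        final.append(r1 if r1 == r2 else adj)
    return final
```

    In practice the adjudicator reads only the discordant cases, but modeling all three label lists keeps the sketch simple.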

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No, a traditional MRMC comparative effectiveness study was not performed or detailed. The summary explicitly states: "The subject device of this premarket notification, SPIN-SWI application, did not require clinical studies to support substantial equivalence to the predicate device" (page 6).
    • Effect Size of Human Readers Improvement: Not applicable, as no MRMC study was conducted.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    • Was a standalone study done? Yes, to some extent, as implied by the "Non-Clinical Testing Summary." The device is a "post-processing algorithm that enhances the contrast... When used in combination with other clinical information, the SPIN-SWI application may aid the qualified radiologist with diagnosis..." (page 4, 5).
      • The "non-clinical testing" and "performance testing (V&V)" seem to refer to the algorithm's output quality independently of human interpretation, focusing on whether it "produces results consistently according to its intended use" (page 6). However, specific metrics for standalone performance (e.g., sensitivity/specificity for detecting specific pathologies) are not provided, as its role is enhancement rather than direct diagnosis.
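    Had standalone metrics been reported, they would typically take the form of sensitivity and specificity computed against a reference standard. A generic sketch of that calculation (illustrative only; no such analysis appears in the document):

```python
def standalone_metrics(y_true, y_pred):
    """Sensitivity and specificity for a standalone (algorithm-only)
    evaluation. Labels: 1 = finding present, 0 = finding absent."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity
```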

    7. Type of Ground Truth Used for the Test Set

    • Type of Ground Truth: Not explicitly stated. The "clinical validation testing" and "clinical user needs testing" suggest that the ground truth was based on the judgment of qualified radiologists regarding "acceptable image quality" and whether the enhanced visualization was beneficial. It's not specified if this involved pathology, outcomes data, or a strict expert consensus process for specific findings.

    8. Sample Size for the Training Set

    • Sample Size for Training Set: Not specified. As a post-processing algorithm for image enhancement, it might rely on established imaging principles and signal processing, potentially requiring less "training" data in the machine learning sense compared to a deep learning diagnostic algorithm. Even if machine learning was involved, the training set size is not disclosed.

    9. How the Ground Truth for the Training Set Was Established

    • How Ground Truth Established: Not specified. Similar to the test set, if training data were used, the method for establishing their ground truth is not detailed.

    K Number
    K122429
    Date Cleared
    2012-11-28

    (111 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K100335

    Intended Use

    Autoplaque is intended to provide an optimized non-invasive application to analyze coronary anatomy and pathology and aid in determining treatment paths from a set of Computed Tomography (CT) Angiographic images.

    Autoplaque is a post processing application option for the ORS Visual platform (K100335). It is a non-invasive diagnostic reading software add-on intended for use by cardiologists and radiologists as an interactive tool for viewing and analyzing cardiac CT data for determining the presence and extent of coronary plaques.

    The software is not intended to replace the skill and judgment of a qualified medical practitioner and should only be used by people who have been appropriately trained in the software's functions, capabilities and limitations. Users should be aware that certain views make use of interpolated data. This is data that is created by the software based on the original data set. Interpolated data may give the appearance of healthy tissue in situations where pathology that is near or smaller than the scanning resolution may be present.
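    The interpolation caveat can be demonstrated numerically: a one-voxel signal dip (a stand-in for pathology near the scanning resolution) loses most of its depth when values are linearly interpolated onto a display grid offset from the acquisition grid. A small NumPy illustration (hypothetical values, not from the submission):

```python
import numpy as np

# A 1-voxel "lesion" (signal dip to 0.2) sampled at scan resolution.
coarse_x = np.arange(0, 10)
coarse = np.ones(10)
coarse[5] = 0.2

# Resample onto a grid shifted by half a voxel, as a viewer might when
# rendering an oblique or zoomed view, using linear interpolation.
fine_x = np.arange(0.5, 9.0)  # 0.5, 1.5, ..., 8.5
fine = np.interp(fine_x, coarse_x, coarse)

# The interpolated minimum (0.6) is far shallower than the true lesion
# value (0.2): the displayed view looks closer to healthy tissue.
```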

    ORS Visual software (K100335) and the Autoplaque add-on must be installed on a suitable commercial computer platform. It is the user's responsibility to ensure the monitor quality and ambient light conditions are consistent with the clinical applications.

    Typical users of ORS Visual (K100335) and Autoplaque are trained medical professionals, including but not limited to radiologists, clinicians, technologists, and others.

    Vessel Analysis is intended to provide an optimized non-invasive application to analyze vascular anatomy and pathology and aid in determining treatment paths from a set of Computed Tomography (CT) Angiographic images.

    Vessel Analysis is a post processing application option for the ORS Visual (K100335) platform family of products and can be used in the analysis of 2D/3D CT Angiography images/data derived from DICOM 3.0 compliant CT scans for the purpose of cardiovascular and vascular disease assessment. This software is designed to support the physician in assessment of stenosis analysis, pre/post stent procedure and directional vessel tortuosity visualization.

    Vessel Analysis automatic visualization tools provide the users with the capabilities to facilitate segmentation of bony structures for accurate identification of the vessels. Once vessels are visualized, tools are available for sizing the vessel, analyzing calcified and non-calcified plaque to determine the densities of plaque within a coronary artery, and measuring areas of abnormalities within a vessel.
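    For context, diameter stenosis is conventionally quantified relative to an adjacent "normal" reference segment. A minimal sketch of the standard NASCET-style calculation, illustrative only since the document does not specify the formula Vessel Analysis uses:

```python
def percent_diameter_stenosis(d_stenosis_mm, d_reference_mm):
    """NASCET-style diameter stenosis: percentage narrowing of the
    minimal lumen diameter relative to a nearby reference diameter."""
    if d_reference_mm <= 0:
        raise ValueError("reference diameter must be positive")
    return (1.0 - d_stenosis_mm / d_reference_mm) * 100.0
```

    For example, a 2.0 mm minimal lumen against a 4.0 mm reference segment is a 50% diameter stenosis.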

    The software is not intended to replace the skill and judgment of a qualified medical practitioner and should only be used by people who have been appropriately trained in the software's functions, capabilities and limitations. Users should be aware that certain views make use of interpolated data. This is data that is created by the software based on the original data set. Interpolated data may give the appearance of healthy tissue in situations where pathology that is near or smaller than the scanning resolution may be present.

    ORS Visual software (K100335) and the Vessel Analysis add-on must be installed on a suitable commercial computer platform. It is the user's responsibility to ensure the monitor quality and ambient light conditions are consistent with the clinical applications. Typical users of Vessel Analysis and ORS Visual (K100335) are trained medical professionals, including but not limited to radiologists, clinicians, technologists, and others.

    Device Description

    The Autoplaque add-on for medical device ORS Visual (K100335) is a post processing analysis software package designed to assist Radiologists, Cardiologists, and other clinicians in the evaluation and assessment of coronary lesions.

    Autoplaque is a software post-processing package for the ORS Visual application (K100335). It provides analysis of the vessel lumen and wall and makes it easier to detect findings in the coronary vessels.

    The Autoplaque add-on has been extensively tested on a variety of platforms by both members of the development and quality control team and by potential customers serving as beta testers. A hazard analysis has been conducted and the level of concern has been classified as moderate. The release version of the software passed all tests considered critical in terms of patient safety and demonstrated an overall acceptable performance for release as determined by the predefined release criteria.

    The Vessel Analysis add-on (Vessel Analysis) for medical device ORS Visual (K100335) is a post processing analysis software package designed to assist Radiologists, Cardiologists, and other clinicians in the evaluation and assessment of vascular anatomy.

    Vessel Analysis is a software post-processing package for the ORS Visual application (K100335). It is an additional tool for the analysis of 2D/3D CT Angiographic images/data providing a number of display, measurements and batch filming/archive features to study user-selected vessels which include but are not limited to stenosis analysis, thrombus, pre/post stent procedures and directional vessel tortuosity visualization.

    The Vessel Analysis add-on has been extensively tested on a variety of platforms by both members of the development and quality control team and by potential customers serving as beta testers. A hazard analysis has been conducted and the level of concern has been classified as moderate. The release version of the software passed all tests considered critical in terms of patient safety and demonstrated an overall acceptable performance for release as determined by the predefined release criteria.

    AI/ML Overview

    The provided text describes two add-on software packages for the ORS Visual medical device: Autoplaque and Vessel Analysis. However, it does not contain information about acceptance criteria or specific studies demonstrating that the devices meet such criteria. It primarily focuses on device descriptions, intended use, and substantial equivalence to predicate devices, along with general statements about hazard analysis and testing.

    Therefore, many of your requested points cannot be answered from the provided text.

    Here is what can be inferred or explicitly stated based on the given information:

    Acceptance Criteria and Reported Device Performance

    The document states: "The release version of the software passed all tests considered critical in terms of patient safety and demonstrated an overall acceptable performance for release as determined by the predefined release criteria." (Section 5, 510(k) SUMMARY, for both AUTOPLAQUE and VESSEL ANALYSIS).

    However, the specific "predefined release criteria" or quantitative performance metrics are not detailed in the provided text. Therefore, a table of acceptance criteria and reported device performance cannot be generated.

    Study Information

    The document does not describe any specific clinical studies with quantitative results, sample sizes, or ground truth establishment relevant to demonstrating the device's performance against detailed acceptance criteria.

    1. Table of Acceptance Criteria and Reported Device Performance:

      • Acceptance Criteria: Not specified in the provided text.
      • Reported Device Performance: Not specified in the provided text.
    2. Sample size used for the test set and the data provenance:

      • Sample Size: Not specified.
      • Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). The document only mentions "extensively tested on a variety of platforms by both members of the development and quality control team and by potential customers serving as beta testers."
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Not specified. The document does not describe a process for establishing ground truth on a test set.
    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • Not specified.
    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • Not explicitly mentioned or described. The document focuses on the software assisting radiologists and cardiologists but does not provide details of an MRMC study comparing human performance with and without AI assistance or any effect sizes.
    6. If a standalone (i.e., algorithm-only, without human-in-the-loop performance) study was done:

      • Not explicitly mentioned or described. The device is described as an "interactive tool" designed to "assist Radiologists, Cardiologists, and other clinicians," implying human-in-the-loop use.
    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • Not specified.
    8. The sample size for the training set:

      • Not specified.
    9. How the ground truth for the training set was established:

      • Not specified.

    Summary of what is present in the document:

    • Intended Use: Autoplaque aids in evaluating and assessing coronary lesions, determining the presence and extent of coronary plaques. Vessel Analysis aids in analyzing vascular anatomy and pathology, stenosis analysis, pre/post stent procedures, and directional vessel tortuosity visualization.
    • Technological Characteristics: Both are post-processing software packages for ORS Visual, compliant with DICOM 3.0, providing 2D/3D imaging, measurement tools, MIP, and MPR. Autoplaque specifically quantifies plaque burden, coronary remodeling, and characterizes lesions (calcified/non-calcified).
    • Substantial Equivalence: The devices are claimed to be substantially equivalent to other commercial products (Philips' CCA Plaque, Vitrea with SUREPlaque for Autoplaque; GE Advanced Vessel Analysis II, Vital Images Vitrea 4.0 for Vessel Analysis). Tables compare features, noting similarities ("same") to predicate devices.
    • Software Development and Risk: Both follow documented processes for software design and verification testing. A hazard analysis classified the "Level of Concern" as Moderate, stating no hardware/software failure in a properly configured environment would be expected to result in patient death or injury.
    • Testing: "Extensively tested on a variety of platforms by both members of the development and quality control team and by potential customers serving as beta testers."
    • Limitations/Warnings: Not intended to replace the skill/judgment of a qualified medical practitioner. Users should be trained. Awareness that interpolated data might appear as healthy tissue where pathology near/smaller than scanning resolution may be present.
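    The plaque-burden and remodeling quantities mentioned in the technological characteristics have conventional cross-sectional definitions, sketched below for context (the document does not state Autoplaque's exact formulas):

```python
def plaque_burden(vessel_area_mm2, lumen_area_mm2):
    """Plaque burden: plaque area (vessel wall area minus lumen area)
    as a percentage of the total vessel cross-sectional area."""
    return (vessel_area_mm2 - lumen_area_mm2) / vessel_area_mm2 * 100.0

def remodeling_index(lesion_vessel_area_mm2, reference_vessel_area_mm2):
    """Remodeling index: vessel area at the lesion over vessel area at
    a reference site; values above ~1.1 are commonly described as
    positive (outward) remodeling."""
    return lesion_vessel_area_mm2 / reference_vessel_area_mm2
```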
