Search Results: Found 3 results

510(k) Data Aggregation

    K Number: K133135
    Date Cleared: 2014-03-07 (154 days)
    Product Code
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K022292, K043441

    Intended Use

    IMPAX Volume Viewing software is a visualization package for PACS workstations. It is intended to support radiographers, medical imaging technicians, radiologists and referring physicians in the reading, analysis and diagnosis of DICOM compliant volumetric medical datasets. The software is intended as a general purpose digital medical image processing tool, with optional functionality to facilitate visualization and measurement of vessel features.

    Other optional functionality is intended for the registration of anatomical (CT) on functional volumetric image data (MR) to facilitate the comparison of various lesions. Volume and distance measurements are intended for evaluation and quantification of tumor measurements, and other analysis and evaluation of both hard and soft tissues. The software also supports interactive segmentation of a region of interest (ROI).

    Web-browser access is available for review purposes. It should not be used to arrive at a diagnosis, treatment plan, or other decision that may affect patient care.
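    As context for the volume and ROI measurements described above: in volumetric DICOM data, an ROI volume is conventionally the number of segmented voxels multiplied by the physical volume of one voxel. The snippet below is a generic, minimal sketch of that arithmetic only, not the device's implementation; the segmentation mask and voxel-spacing values are hypothetical.

        import numpy as np

        # Generic ROI volume calculation from a binary segmentation mask; illustrative
        # only, not the device's actual algorithm. Spacing and mask are hypothetical.
        voxel_spacing_mm = (0.7, 0.7, 1.0)                  # (x, y, z) voxel spacing
        segmentation_mask = np.zeros((256, 256, 120), dtype=bool)
        segmentation_mask[100:140, 100:140, 40:60] = True   # stand-in for a segmented lesion

        voxel_volume_mm3 = voxel_spacing_mm[0] * voxel_spacing_mm[1] * voxel_spacing_mm[2]
        roi_volume_mm3 = segmentation_mask.sum() * voxel_volume_mm3
        print(f"ROI volume: {roi_volume_mm3 / 1000.0:.1f} mL")   # 1 mL = 1000 mm^3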

    Device Description

    The new device is similar to the predicate devices. All are PACS system accessories that allow the user to view and manipulate 3D image data sets. This new version includes automated removal of bone-like structures, stenosis measurement and web-browser access.

    Principles of operation and technological characteristics of the new and predicate devices are the same.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document describes verification and validation testing, with acceptance criteria for various functionalities. The reported performance consistently "met acceptance criteria" for all described tests. Specific quantitative performance values are generally not provided; the document instead emphasizes that the established criteria for equivalence and usability were met. An illustrative sketch of the two quantitative tolerance checks follows the table.

    Functionality Tested | Acceptance Criteria | Reported Device Performance
    Measurement Algorithms | Identical output to the predicates (Volume Viewing 2.0 (K111638) and Registration and Fusion (K080013)); measurements within +/- scanner resolution | Results met the established acceptance criteria
    Crosshair Positioning | Within half a voxel (to allow for rounding differences across graphics cards) | Results met the established acceptance criteria
    Centerline Computation / Vessel Visualization (contrast-filled vessels in CT angiography) | Adequate tracing and visualization (side-by-side comparison with the IMPAX Volume Viewing 2.2 predicate) | Results met acceptance criteria
    Stenosis Measurement | User can determine the amount of stenosis (side-by-side comparison with the Voxar 3D predicate) | Results met acceptance criteria
    Bone-like Structure Removal (CT angiography of the thorax, abdomen, pelvis, and upper/lower extremities) | Adequate removal from view (side-by-side comparison with the Voxar predicate) | Results met acceptance criteria
    Volume Measurements (manual / semi-automated) | User can perform measurements in a user-friendly and intuitive way | Results met acceptance criteria
    Image Quality (2D and 3D rendering) | Adequate (side-by-side comparison with the IMPAX Volume Viewing 2.2 predicate) | Results met acceptance criteria
    Web-browser component (XERO Clinical Applications 1.0) for non-diagnostic review | Usability of features and functionalities for non-diagnostic review of CT and MR data sets using 3D and multi-planar reconstructions | Validation successfully completed; scored as acceptable
    Stereoscopic 3D Viewing | Equivalent to "regular" 3D viewing, with no distinct medical or clinical benefit over it | Concluded that both viewing methods are equivalent
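    The two quantitative criteria in this table (measurement agreement within +/- one scanner resolution, and crosshair linkage within half a voxel) reduce to simple tolerance checks. The sketch below only illustrates that logic; the function names and example values are hypothetical and are not taken from the submission, which reports only that the criteria were met.

        import math

        # Illustrative tolerance checks only; names and values are hypothetical and
        # not drawn from the 510(k) submission.

        def measurement_within_tolerance(new_value_mm: float,
                                         predicate_value_mm: float,
                                         scanner_resolution_mm: float) -> bool:
            """Acceptance: the new measurement agrees with the predicate within +/- scanner resolution."""
            return abs(new_value_mm - predicate_value_mm) <= scanner_resolution_mm

        def crosshair_within_half_voxel(position_a_voxels: tuple[float, float, float],
                                        position_b_voxels: tuple[float, float, float]) -> bool:
            """Acceptance: linked viewports point at the same location to within half a voxel."""
            return math.dist(position_a_voxels, position_b_voxels) <= 0.5

        # Hypothetical example values:
        print(measurement_within_tolerance(12.4, 12.1, scanner_resolution_mm=0.5))   # True
        print(crosshair_within_half_voxel((64.0, 64.2, 30.1), (64.1, 64.0, 30.0)))   # True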

    2. Sample Size for the Test Set and Data Provenance

    • Sample Size for Validation (Clinical Studies): 154 anonymized clinical studies.
    • Sample Size for Web-browser Component Validation: 42 anonymized clinical data sets.
    • Data Provenance: The anonymized clinical studies were used for validation in test labs and hospitals in the United Kingdom, the United States, Ireland, and Belgium. The document doesn't specify if the data originated from these countries or elsewhere, only where the validation was performed. The data is retrospective, as it consists of "anonymized clinical studies."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Main Validation: 29 medical professionals participated. Their specific qualifications (e.g., years of experience, subspecialty) are not explicitly stated, beyond being "medical professionals."
    • Web-browser Component Validation: 11 medical professionals participated. Their specific qualifications are also not explicitly stated.
    • Stereoscopic 3D Viewing Concept Tests: 6 medical professionals from 4 different hospitals in Belgium and the Netherlands. Their specific qualifications are not explicitly stated.

    4. Adjudication Method for the Test Set

    The document does not explicitly describe a formal adjudication method (like 2+1 or 3+1 consensus) for the test sets. Instead, it mentions that a "scoring scale was implemented and acceptance criteria established" for the main validation. For the web-browser component, "examiners focused on the usability of features and functionalities." For the stereoscopic 3D viewing, "Tests by medical professionals showed" and "The tests concluded." This suggests a consensus-based approach or individual assessments against established scoring scales rather than a formal arbitration process.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No Multi-Reader Multi-Case (MRMC) comparative effectiveness study was performed to measure how much human readers improve with AI assistance versus without it.

    The validation involved side-by-side comparisons with predicate devices, where medical professionals evaluated whether the new device's functionality (e.g., centerline computation, stenosis measurement, bone removal, image quality) was "adequate" or allowed the user to "determine the amount of stenosis" comparable to the predicates. This is more of a non-inferiority or equivalence assessment against predicate functionality, rather than an explicit measure of human reader performance improvement with the new device (AI assistance, in this context) versus without it.

    The testing of stereoscopic 3D viewing concluded "no specific medical or clinical benefits to using the stereoscopic 3D view" over the "regular" 3D view, indicating no improvement for human readers in that specific aspect.

    6. Standalone (i.e., algorithm only without human-in-the-loop performance) Study

    Yes, standalone performance was implicitly tested, particularly for "measurement algorithms" and "crosshair positioning."

    • Measurement Accuracy: Regression testing assured "the different measurement algorithms still provide the same output as the predicates." Testers "made identical measurements of diameters, areas and volumes and compared those against reference values."
    • Crosshair Positioning: Tests verified "whether viewports link to the same location in every dataset."

    While human testers initiated these measurements and observations, the assessment was of the algorithm's output against a defined reference or predicate, rather than human diagnostic performance with the algorithm's output.

    7. Type of Ground Truth Used

    • Reference Values / Predicate Comparisons: For measurement accuracy tests (diameters, areas, volumes), the ground truth was established by "reference values" and comparison against "equivalent" measurements from predicate devices (Voxar 3D Enterprise for vessel measurements, Mirage 5.5 for semi-automatic region growing volumes).
    • Expert Consensus/Qualitative Assessment: For many validation objectives (e.g., adequacy of centerline tracing, vessel visualization, bone removal, image quality, usability of volume measurements), the ground truth was essentially a qualitative assessment by medical professionals against established scoring scales and side-by-side comparisons with predicate devices. For stereoscopic 3D viewing, "Concept tests involving 6 medical professionals... were asked to score" and "The tests concluded."
    • Technical Specifications: For crosshair positioning, the ground truth was defined by technical specifications (half a voxel).

    8. Sample Size for the Training Set

    The document does not provide any information about the sample size used for a training set. The testing described focuses on verification and validation of specific functionalities in comparison to predicates or against predefined criteria. This product appears to be a PACS accessory with advanced viewing and manipulation tools, and while it involves "automated" features (like bone removal), the process description suggests a rule-based or algorithmic approach rather than a machine learning model that would typically require a distinct training set. If machine learning was used, the training data information is absent.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned or implied for machine learning, there is no information on how its ground truth would have been established. The ground truth described in the document pertains to the validation and verification sets.


    K Number: K072191
    Date Cleared: 2007-10-25 (80 days)
    Product Code
    Regulation Number: 892.1200
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K043441

    Intended Use

    The ClearVision nuclear medicine imaging system is intended for use as a diagnostic imaging device to acquire and process gated and non-gated Single Photon Emission Computed Tomography (SPECT) images.

    Used with appropriate radiopharmaceuticals, the ClearVision system produces images that depict the anatomical distribution of radioisotopes within the myocardium.

    Device Description

    The ClearVision Nuclear Medicine Imaging System acquires and processes cardiac data including gated and non-gated Single Photon Emission Computed Tomography (SPECT) studies. After completion of an acquisition, the operator can select the resulting acquisition data file to generate both qualitative and quantitative results for review by a physician. This includes processing using Release 5.6 of Segami Corporation's Mirage processing software that was previously cleared under 510(k) number K043441 dated 13-January-2005.

    The acquisition system consists of either a single or dual small field-of-view detectors with each mounted on top of a tower that contains system electronics. To support the acquisition of SPECT data, the patient chair rotates up to 360 degrees in either clockwise or counterclockwise direction.

    Prior to a patient scan, the following system features are used to ensure the myocardium is centered within each detector's field of view (FOV):

    • Each tower can be moved horizontally along rails mounted to the floor plate.
    • The patient chair seat pan can be moved side-to-side.
    • Vertical and horizontal beam lasers are mounted to the side of the detector.

    The ClearVision system's compact footprint and small FOV detector are specifically designed for placement in a facility lacking adequate floor space for a typical nuclear medicine imaging system.

    AI/ML Overview

    The provided document is a 510(k) summary for the GVI Medical Devices ClearVision Nuclear Imaging System. It focuses on demonstrating substantial equivalence to a predicate device rather than presenting a detailed study with acceptance criteria and performance metrics for the ClearVision system itself.

    Therefore, much of the requested information regarding acceptance criteria, specific study designs, sample sizes, expert qualifications, and ground truth establishment is not available in this document. The document describes a comparison of features and performance characteristics to the predicate devices (the Digirad Cardius 1 XPO and Cardius 2 XPO SPECT Imaging Systems, K070542) to establish substantial equivalence.

    Here's a breakdown of what can be extracted and what is not available based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state "acceptance criteria" but rather presents a "Feature Comparison Summary" to demonstrate substantial equivalence to the predicate device. The performance characteristics listed are NEMA (National Electrical Manufacturers Association) measurements, which are common benchmarks for SPECT systems and implicitly serve as the performance criteria; a brief illustrative comparison follows the table.

    Feature | Acceptance Criteria (Predicate) | Reported ClearVision Performance | Does it meet acceptance criteria?
    NEMA Reconstructed Spatial Resolution | 11.00 mm | 9.8 mm (central), 7.6 mm (tangential), 8.4 mm (radial) | Yes. Smaller values indicate better resolution, so the ClearVision's values meet or exceed the predicate's 11.00 mm.
    NEMA System Sensitivity | 160 cpm/µCi | 147 cpm/µCi | Slightly lower than the predicate, but presented in the context of substantial equivalence; the document explicitly states the system "performs as well as the predicate".
    NEMA Energy Resolution | | |
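    As a small illustration of the comparison logic described above (spatial resolution improves as the value decreases, sensitivity improves as it increases), the sketch below checks the reported values against the predicate's. The variable names and the simple pass/fail logic are assumptions made for illustration; the 510(k) itself reports only the values in the table.

        # Direction-aware comparison of NEMA metrics against the predicate's values.
        # Illustrative only; 9.8 mm is the reported central resolution value.
        predicate   = {"spatial_resolution_mm": 11.00, "sensitivity_cpm_per_uCi": 160.0}
        clearvision = {"spatial_resolution_mm": 9.8,   "sensitivity_cpm_per_uCi": 147.0}

        # For spatial resolution, lower is better; for sensitivity, higher is better.
        lower_is_better = {"spatial_resolution_mm": True, "sensitivity_cpm_per_uCi": False}

        for metric, predicate_value in predicate.items():
            reported = clearvision[metric]
            meets = (reported <= predicate_value) if lower_is_better[metric] else (reported >= predicate_value)
            status = "meets or exceeds the predicate" if meets else "below the predicate (assessed under substantial equivalence)"
            print(f"{metric}: {reported} vs {predicate_value} -> {status}")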

    K Number: K070542
    Manufacturer
    Date Cleared: 2007-03-23 (25 days)
    Product Code
    Regulation Number: 892.1200
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K043441

    Intended Use

    Cardius 1 XPO, Cardius 2 XPO, Cardius 3 XPO Imaging Systems:

    The Cardius product models are intended for use in the generation of cardiac studies, including planar and Single Photon Emission Computed Tomography (SPECT) studies, in nuclear medicine applications.

    2020tc SPECT Imaging System:

    The Digirad 2020tc SPECT Imaging system is intended for use in the generation of both planar and Single Photon Emission Computed Tomography (SPECT) clinical images in nuclear medicine applications. The Digirad SPECT Rotating Chair is used in conjunction with the Digirad 2020tc Imager™ to obtain SPECT images in patients who are seated in an upright position.

    Specifically, the 2020tc Imager™ is intended to image the distribution of radionuclides in the body by means of a photon radiation detector. In so doing, the system produces images depicting the anatomical distribution of radioisotopes within the human body for interpretation by authorized medical personnel.

    Device Description

    Mirage XP is an automated processing and interpretation software package. This software will be available as standard software on the Digirad imaging systems and/or as a standalone software package on a workstation or a laptop. The enhancements over previous versions of the software include automated processing, preference-based selections, an improved EF algorithm, and segment scoring for quantification and interpretation.
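    The summary does not describe the EF algorithm itself. For background only, left-ventricular ejection fraction in gated SPECT is conventionally derived from the end-diastolic and end-systolic cavity volumes; the sketch below shows that standard relationship with hypothetical volumes and is not the Mirage XP implementation.

        # Standard LVEF relationship used in gated SPECT quantification; background
        # only, not the Mirage XP algorithm. The volumes below are hypothetical.
        def ejection_fraction_percent(end_diastolic_volume_ml: float,
                                      end_systolic_volume_ml: float) -> float:
            stroke_volume_ml = end_diastolic_volume_ml - end_systolic_volume_ml
            return 100.0 * stroke_volume_ml / end_diastolic_volume_ml

        print(ejection_fraction_percent(end_diastolic_volume_ml=120.0,
                                        end_systolic_volume_ml=50.0))   # ~58.3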

    AI/ML Overview

    The provided text is a 510(k) summary for a medical device (SPECT Imaging System with new software). It describes the changes to the device (software update), its intended use, and the conclusion from testing. However, it does not include specific acceptance criteria, detailed study results, or information about ground truth establishment, expert adjudication, or training/test set sample sizes as requested in the prompt.

    The core conclusion from the provided text is that the new software functions as intended and produces images equivalent to previous versions, without impacting safety or effectiveness. This is a general statement rather than a detailed performance report against specific criteria.

    Therefore, I cannot populate the table and answer all questions based on the provided text alone. I will indicate where information is Not Available (N/A) from the text.


    Acceptance Criteria and Device Performance Study Summary

    The provided 510(k) summary focuses on a software update (Mirage XP, equivalent to Segami's Mirage 5.6) for existing Digirad SPECT imaging systems. The study detailed in the summary is primarily a functional verification and equivalence assessment rather than a detailed performance study against numerical acceptance criteria.

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria Category | Specific Criteria (as implied or stated) | Reported Device Performance
    Functional Equivalence | The new Mirage XP software can be installed and functions as intended on Digirad imaging systems. | Confirmed; the software installs and functions as intended.
    Safety | Use of the Mirage XP software (with existing acquisition/reconstruction) does not result in known anomalies impacting safety. | No known anomalies impacting safety.
    Effectiveness | Use of the Mirage XP software (with existing acquisition/reconstruction) does not result in known anomalies impacting effectiveness, including operator usage and human factors. | No known anomalies impacting effectiveness, including operator usage and human factors.
    Image Quality | The quality of images produced with the Mirage XP software is equivalent to that seen in previous versions of Mirage software used on Digirad imaging systems. | Image quality is equivalent to previous versions of Mirage software.
    Automated Processing | Specific performance criteria for the enhancements (e.g., speed, accuracy of segmentation, EF algorithm improvements) are not specified. | Enhancements include automated processing, preference-based selections, an improved EF algorithm, and segment scoring for quantification and interpretation; no specific performance metrics or acceptance criteria for these improvements are provided in this summary.

    2. Sample size used for the test set and the data provenance

    • Sample Size: Not Available (N/A). The document states "Testing was done," but does not specify the number of cases, scans, or an exact test set size.
    • Data Provenance: Not Available (N/A). The document does not mention the country of origin of the data or whether the data was retrospective or prospective. It only implies that the testing was likely conducted internally by Digirad Corporation.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: Not Available (N/A).
    • Qualifications of Experts: Not Available (N/A).
    • The comparison for image quality ("equivalent to those seen in previous versions") suggests an internal subjective assessment, but no details on expert involvement are provided.

    4. Adjudication method for the test set

    • Adjudication Method: Not Available (N/A). No information on adjudication methods for the test set (e.g., 2+1, 3+1, none) is provided.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance

    • MRMC Study: No, an MRMC comparative effectiveness study was not done or reported in this summary. The device in question is software for SPECT imaging system processing and interpretation, which includes "automated processing, preference based selections, improved EF algorithm and segment scoring for quantification and interpretation," but it's not explicitly described as an "AI assistance" device in the context of improving human reader performance.
    • Effect Size: Not Available (N/A). Since an MRMC study was not reported, there's no information on the effect size of human reader improvement with or without AI assistance.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Yes, implicitly. The software "Mirage XP is an automated processing and interpretation software package." The statement that "Mirage XP software can be installed and functions as intended" and produces "equivalent" images suggests that the algorithm's standalone functional and image output capabilities were assessed. However, no specific standalone performance metrics (e.g., detection sensitivity/specificity for particular conditions) were provided, as the focus was on functional equivalence to previous versions.

    7. The type of ground truth used

    • Type of Ground Truth: Not explicitly stated. The comparison is primarily against the performance of previous versions of the software and their images, implying a reference standard of previously accepted image quality and processing outputs. It does not mention an independent ground truth such as pathology, outcomes data, or a consensus of multiple expert interpretations for clinical findings.

    8. The sample size for the training set

    • Sample Size for Training Set: Not Available (N/A). This document is about a new release of pre-existing software. Information on the training set used during the development of Mirage 5.6 (or Mirage XP) is not included in this 510(k) summary.

    9. How the ground truth for the training set was established

    • Ground Truth Establishment for Training Set: Not Available (N/A). As the training set size and details are not provided, neither is information on how its ground truth was established.