
510(k) Data Aggregation

    K Number: K051549
    Manufacturer: Digirad
    Date Cleared: 2005-07-13 (30-day review)
    Product Code: not listed
    Regulation Number: 892.1200
    Reference & Predicate Devices: not listed
    Predicate For: not listed
    Intended Use

    Cardius-1, Cardius-2, Cardius-3:

    The Cardius product models are intended for use in the generation of cardiac studies, including planar and Single Photon Emission Computed Tomography (SPECT) studies, in nuclear medicine applications.

    2020tc SPECT Imaging System:

    The Digirad 2020tc SPECT Imaging System is intended for use in the generation of both planar and Single Photon Emission Computed Tomography (SPECT) clinical images in nuclear medicine applications. The Digirad SPECT Rotating Chair is used in conjunction with the Digirad 2020tc Imager™ to obtain SPECT images in patients who are seated in an upright position.

    Specifically, the 2020tc Imager™ is intended to image the distribution of radionuclides in the body by means of a photon radiation detector. In so doing, the system produces images depicting the anatomical distribution of radioisotopes within the human body for interpretation by authorized medical personnel.

    Device Description

    The changes to the Cardius and 2020tc cameras involve modifications to the data acquisition software used on the gamma cameras. The primary change to the data acquisition software involves the addition of a Camera Center-of-Rotation (COR) quantitative check. Additional minor changes were made to the User Interface screen. There were no hardware changes to the cameras.
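
    A COR check of this kind is standard SPECT quality control: a point source is imaged over a full rotation, and its measured centroid should trace a sinusoid if the gantry geometry is correct. The 510(k) summary does not describe how Digirad implemented the check; the sketch below is a generic illustration of one common approach, and the 0.5-pixel tolerance is a typical QC convention rather than a value taken from the submission.

        import numpy as np

        def cor_check(angles_deg, centroids_px, tol_px=0.5):
            """Quantitative center-of-rotation check (generic sketch, not
            Digirad's implementation).

            angles_deg   -- projection angles over the orbit (degrees)
            centroids_px -- measured x-centroid of a point source at each
                            angle (detector pixels)
            tol_px       -- pass/fail tolerance on residuals (assumed value)
            """
            theta = np.radians(np.asarray(angles_deg, dtype=float))
            x = np.asarray(centroids_px, dtype=float)

            # Ideal geometry: x(theta) = a*sin(theta) + b*cos(theta) + x0,
            # where x0 is the projected center of rotation. Fit by linear
            # least squares.
            A = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
            coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)

            residuals = x - A @ coeffs
            max_dev = float(np.max(np.abs(residuals)))
            return {
                "cor_offset_px": float(coeffs[2]),  # fitted rotation center
                "max_residual_px": max_dev,
                "passed": max_dev <= tol_px,
            }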

    AI/ML Overview

    The provided text describes modifications to the software of existing SPECT imaging systems (Cardius and 2020tc cameras). The primary change involved adding a Camera Center-of-Rotation (COR) quantitative check and minor user interface adjustments. The core functionality and intended use of the devices remained unchanged.

    Here's an analysis of the acceptance criteria and study information, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria                                        | Reported Device Performance
    -----------------------------------------------------------|------------------------------------------------------------
    Pre-defined acceptance criteria for software test results  | All software test results met the pre-defined acceptance criteria.
    Quality of clinical images with the modified software      | Image quality with the modified software was similar to that of images produced with the unmodified software.
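
    The submission does not say how image similarity was assessed. As one illustrative way to quantify "similar quality" between paired reconstructions of the same acquisition, the sketch below uses the structural similarity index (SSIM) from scikit-image; the 0.95 threshold is an assumption for the example, not a criterion from the filing.

        import numpy as np
        from skimage.metrics import structural_similarity

        def images_similar(img_unmodified, img_modified, threshold=0.95):
            """Compare the same acquisition processed with the unmodified
            and modified software. Returns the SSIM score and a pass flag
            against an assumed threshold."""
            lo = min(img_unmodified.min(), img_modified.min())
            hi = max(img_unmodified.max(), img_modified.max())
            score = structural_similarity(
                img_unmodified.astype(np.float64),
                img_modified.astype(np.float64),
                data_range=float(hi - lo),
            )
            return score, score >= threshold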

    2. Sample Size Used for the Test Set and Data Provenance

    The document mentions "clinical imaging with modified and unmodified software." However, it does not specify the sample size used for this clinical imaging (the test set) or the data provenance (e.g., country of origin, retrospective or prospective).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document does not specify the number of experts used or their qualifications for establishing ground truth, as it focuses on software verification and clinical image quality comparison.

    4. Adjudication Method for the Test Set

    The document does not specify any adjudication method. It implies a comparison of image quality, but the process of this comparison (e.g., blinded review, consensus) is not detailed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    No MRMC comparative effectiveness study was reported. The testing focused on demonstrating that the revised software did not degrade image quality or system performance relative to the previous version. Human reader improvement with AI assistance was not assessed, as the changes are to the acquisition software, not to an AI-assisted interpretation tool.

    6. If a Standalone Study (Algorithm Only Without Human-in-the-Loop Performance) was Done

    The primary change was the addition of a "Camera Center-of-Rotation (COR) quantitative check" and minor UI changes. This is an internal technical check within the acquisition software, not an independently evaluated standalone diagnostic algorithm. The testing described is aligned with software verification and validation of the system's technical function rather than with standalone diagnostic accuracy, so no standalone study in the diagnostic-AI sense was done or reported.

    7. The Type of Ground Truth Used

    The most relevant "ground truth" implicitly used in this context would be the performance of the unmodified software/system as the benchmark for comparison. The goal was to demonstrate that the modified software produced "similar" quality images and met pre-defined technical acceptance criteria. There's no mention of external clinical ground truth like pathology or patient outcomes.

    8. The Sample Size for the Training Set

    This document describes software updates to an existing medical imaging system. It does not mention a training set in the context of machine learning. The changes are to data acquisition software, not an AI model that would typically require a training set.

    9. How the Ground Truth for the Training Set was Established

    As no training set is mentioned (see point 8), the method for establishing its ground truth is not applicable.
