510(k) Data Aggregation
(28 days)
CARDIUS-1, CARDIUS-2, CARDIUS-3, 2020TC SPECT IMAGING SYSTEM
Cardius-1, Cardius-2, Cardius-3:
The Cardius product models are intended for use in the generation of cardiac studies, including planar and Single Photon Emission Computed Tomography (SPECT) studies, in nuclear medicine applications.
2020tc SPECT Imaging System:
The Digirad 2020tc SPECT Imaging system is intended for use in the generation of both planar and Single Photon Emission Computed Tomography (SPECT) clinical images in nuclear medicine applications. The Digirad SPECT Rotating Chair is used in conjunction with the Digirad 2020tc Imager™ to obtain SPECT images in patients who are seated in an upright position.
Specifically, the 2020tc Imager™ is intended to image the distribution of radionuclides in the body by means of a photon radiation detector. In so doing, the system produces images depicting the anatomical distribution of radioisotopes within the human body for interpretation by authorized medical personnel.
The changes to the Digirad 2020tc and Cardius SPECT imaging cameras involve the addition of an Image Stabilization System, which is used to correct image studies for patient motion in SPECT data acquired with Digirad nuclear medicine gamma camera systems. The Image Stabilization System consists of two parts: a hardware component that mounts to the SPECT imaging system, and a software module that collects data from the hardware and corrects the image data for motion. The resulting motion-corrected patient study data is referred to as the Image Stabilized Patient Study. The Image Stabilization System operates only with the Digirad camera models described above and is compatible with proprietary Digirad acquisition software running under the Windows operating system on standard PC architecture.
The proposed Image Stabilization System performs substantially the same function as the currently cleared Cedars-Sinai Motion Correction Program (MoCo), cleared for use on Digirad SPECT Imaging Systems under Digirad 510(k) #K023110.
The proposed Image Stabilization System automatically produces an Image Stabilized Patient Study, corrected for patient motion, which is available in the existing database. The original image study is produced in the same manner as in the previously cleared devices, and both studies are stored in the same patient record in the database. Additional minor changes were made to the User Interface screen.
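The summary does not describe the correction algorithm itself. As a rough illustration only, the sketch below assumes the tracking hardware reports a per-frame planar displacement that the software undoes by shifting each projection frame; the function name, data shapes, and interpolation choice are all assumptions, not Digirad's implementation.

```python
import numpy as np
from scipy.ndimage import shift

def stabilize_study(projections, displacements):
    """Undo measured patient motion in a SPECT projection stack.

    projections   -- array of shape (n_frames, rows, cols) of acquired counts
    displacements -- array of shape (n_frames, 2) of (dy, dx) patient
                     displacement per frame, as reported by the tracking
                     hardware (a hypothetical data format)
    """
    corrected = np.empty(projections.shape, dtype=float)
    for i, (dy, dx) in enumerate(displacements):
        # Shift each frame opposite to the measured motion so the activity
        # distribution stays registered across the whole acquisition.
        corrected[i] = shift(projections[i].astype(float), (-dy, -dx),
                             order=1, mode="nearest")
    return corrected
```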
The Image Stabilized Patient Studies produced by the proposed device are identical in file structure to the original, unmodified data set; therefore, SeeQuanta 1.2 and the Image Stabilization System are fully compatible with the same database, reconstruction software, and processing software used with the previously cleared devices. Hence, there are no changes to these software modules.
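Since the compatibility claim rests on the stabilized study sharing the original study's file structure, a structural equality check is the natural way to verify it. The sketch below is a minimal, hypothetical version that assumes each study is exposed as a dict of named numpy arrays; this is a stand-in layout, not the actual Digirad file format.

```python
def same_file_structure(original, stabilized):
    """Check that two studies share field names, dtypes, and array shapes.

    Both arguments are assumed to be dicts mapping field names to numpy
    arrays (a hypothetical stand-in for the real study format). Pixel
    values may differ between the two -- only the structure must match.
    """
    if original.keys() != stabilized.keys():
        return False
    return all(
        original[k].dtype == stabilized[k].dtype
        and original[k].shape == stabilized[k].shape
        for k in original
    )
```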
This proposed optional software addition will be available to Digirad customers both integrated with the 2020tc, Cardius-1, Cardius-2, and Cardius-3 SPECT Imaging Systems, and separately as a retrofit device for existing Digirad product customers.
Here's an analysis of the provided text for this 510(k) submission, focusing on the acceptance criteria and study details:
1. Table of Acceptance Criteria and Reported Device Performance
The provided text focuses on the equivalence of the Image Stabilization System to existing predicate devices, rather than establishing new performance metrics. Therefore, explicit numerical acceptance criteria and a direct comparison table as might be seen for a diagnostic accuracy study are not present. Instead, the "acceptance criteria" are implied to be "similar quality" of corrected images.
| Acceptance Criterion (Implied) | Reported Device Performance |
|---|---|
| Image Quality | The quality of the phantom images corrected with the Image Stabilization System and the modified acquisition software was similar to the quality of images post-processed using the MoCo Motion Correction program (a predicate device); see the sketch after this table. |
| Functionality | The Image Stabilization System performs substantially the same function as the currently cleared Cedars-Sinai Motion Correction Program (MoCo) and other predicate devices (Mirage software, Cedars-Sinai BPGS and MoCo). |
| Design Outputs | Extensive verification testing was completed on all cleared Digirad SPECT Imaging Systems integrated with the Image Stabilization Device to demonstrate that the design outputs met the design inputs of the proposed Image Stabilization Accessory Device. All software test results met pre-defined acceptance criteria (specific criteria not detailed). |
| Data Compatibility | Image Stabilized Patient Studies are identical in file structure to the original, unmodified data set, ensuring full compatibility with the existing database, reconstruction, and processing software. |
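The summary does not name the metric behind "similar quality." One plausible way to quantify such a comparison is normalized cross-correlation between the two corrected phantom images; the sketch below, including its 0.95 threshold, is an assumption for illustration, not the criterion Digirad used.

```python
import numpy as np

def normalized_cross_correlation(img_a, img_b):
    """Normalized cross-correlation: 1.0 means identical up to linear scaling."""
    a = img_a.astype(float) - img_a.mean()
    b = img_b.astype(float) - img_b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

# corrected_new:  phantom study corrected by the Image Stabilization System
# corrected_moco: the same study corrected by the predicate MoCo program
# similar = normalized_cross_correlation(corrected_new, corrected_moco) >= 0.95
```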
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The text states, "Testing was performed to analyze the content of corrected phantom studies using the Image Stabilization Accessory integrated with the Digirad Cardius-1 camera." It does not specify the number of phantom studies included in this analysis.
- Data Provenance: The testing was performed using "phantom studies," indicating simulated or controlled data rather than patient data. The country of origin for this data is not specified but is presumed to be internal testing by Digirad Corporation (USA). The study is prospective in the sense that the new device was used to correct specific phantom data, but the data itself is not from real patients.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not provided in the text. Since the testing involved phantom studies and comparison to a predicate device's output, it's unlikely that a panel of medical experts was used to establish ground truth in the traditional sense. The "ground truth" for phantom studies is typically defined by the known characteristics of the phantom and the expected ideal image.
4. Adjudication Method for the Test Set
This information is not provided. Given the nature of the testing (phantom studies comparing image quality to a predicate), a formal adjudication method like 2+1 or 3+1 is unlikely to have been employed. The comparison was likely a technical assessment of image characteristics.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly mentioned or described. The study focused on the technical performance and similarity of image quality for the motion correction system itself, not on the impact of this correction on human reader performance in interpreting patient studies.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance evaluation was conducted. The text states that "Testing was performed to analyze the content of corrected phantom studies using the Image Stabilization Accessory" and that "extensive Verification testing was completed on all cleared Digirad SPECT Imaging Systems integrated with the Image Stabilization Device to demonstrate that the design outputs met the design inputs." This indicates that the algorithm's performance in correcting phantom images was evaluated independently.
7. The Type of Ground Truth Used
The ground truth used was based on phantom studies and comparison to the output of predicate device software (MoCo). For a phantom, the "ground truth" is the known ideal image of the phantom without motion artifacts. The acceptance criterion was that the corrected phantom images from the new system were "similar in quality" to those corrected by the predicate MoCo program.
8. The Sample Size for the Training Set
This information is not provided. The submission describes the addition of a new hardware and software module for image stabilization. It does not mention whether this module uses a machine learning algorithm that requires a training set. If it's a traditional image processing algorithm, a "training set" in the machine learning sense might not apply.
9. How the Ground Truth for the Training Set Was Established
Since a training set is not mentioned and it's unclear if machine learning was used, the method for establishing its ground truth is not applicable/not provided.
(30 days)
CARDIUS-1, CARDIUS-2, CARDIUS-3, 2020TC SPECT IMAGING SYSTEM
Cardius-1, Cardius-2, Cardius-3:
The Cardius product models are intended for use in the generation of cardiac studies, including planar and Single Photon Emission Computed Tomography (SPECT) studies, in nuclear medicine applications.
2020tc SPECT Imaging System:
The Digirad 2020tc SPECT Imaging system is intended for use in the generation of both planar and Single Photon Emission Computed Tomography (SPECT) clinical images in nuclear medicine applications. The Digirad SPECT Rotating Chair is used in conjunction with the Digirad 2020tc Imager™ to obtain SPECT images in patients who are seated in an upright position.
Specifically, the 2020tc Imager™ is intended to image the distribution of radionuclides in the body by means of a photon radiation detector. In so doing, the system produces images depicting the anatomical distribution of radioisotopes within the human body for interpretation by authorized medical personnel.
The changes to the Cardius and 2020tc cameras involve modifications to the data acquisition software used on the gamma cameras. The primary change to the data acquisition software involves the addition of a Camera Center-of-Rotation (COR) quantitative check. Additional minor changes were made to the User Interface screen. There were no hardware changes to the cameras.
The provided text describes modifications to the software of existing SPECT imaging systems (Cardius and 2020tc cameras). The primary change involved adding a Camera Center-of-Rotation (COR) quantitative check and minor user interface adjustments. The core functionality and intended use of the devices remained unchanged.
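The text does not detail how the COR quantitative check works. In standard SPECT quality control, a point source is imaged over a full rotation, its centroid in each projection traces a sinusoid across the detector, and the fitted offset is compared with the detector's electronic center. The sketch below implements that textbook procedure; the 0.5-pixel tolerance is a hypothetical value, not the submission's actual criterion.

```python
import numpy as np

def cor_offset(centroids_x, angles_deg):
    """Estimate the center of rotation from point-source projections.

    centroids_x -- transaxial centroid (in pixels) of a point source in
                   each projection frame
    angles_deg  -- gantry angle of each frame

    Fits x(theta) = A*cos(theta) + B*sin(theta) + C by least squares;
    C is the apparent center of rotation on the detector.
    """
    theta = np.radians(angles_deg)
    design = np.column_stack([np.cos(theta), np.sin(theta), np.ones_like(theta)])
    (a, b, c), *_ = np.linalg.lstsq(design, centroids_x, rcond=None)
    return c

# Hypothetical pass/fail rule -- the actual tolerance is not stated:
# cor_error = cor_offset(cx, angles) - detector_center_x
# passed = abs(cor_error) < 0.5  # pixels
```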
Here's an analysis of the acceptance criteria and study information, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Pre-defined acceptance criteria for software test results | All software test results met pre-defined acceptance criteria. |
| Quality of clinical images with modified software | The quality of the clinical images produced with the modified software was similar to the quality of the images produced with the unmodified software (see the sketch after this table). |
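In effect, the verification described above is a regression test with the unmodified software as the baseline. A minimal sketch of such a check follows, using relative RMS difference over paired acquisitions; both the metric and the 5% threshold are illustrative assumptions, since the submission does not state its pre-defined criteria.

```python
import numpy as np

def verify_against_baseline(pairs, max_rel_rms=0.05):
    """Regression-style check of modified acquisition software.

    pairs       -- iterable of (baseline_img, modified_img) numpy arrays:
                   the same acquisition processed by the unmodified and
                   modified software (hypothetical test harness)
    max_rel_rms -- illustrative threshold; the actual pre-defined
                   acceptance criteria are not stated in the submission
    """
    failures = []
    for i, (base, mod) in enumerate(pairs):
        base = base.astype(float)
        mod = mod.astype(float)
        rel_rms = np.sqrt(((mod - base) ** 2).mean()) / base.mean()
        if rel_rms > max_rel_rms:
            failures.append((i, float(rel_rms)))
    return failures  # an empty list means every pair met the criterion
```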
2. Sample Size Used for the Test Set and Data Provenance
The document mentions "clinical imaging with modified and unmodified software." However, it does not specify the sample size used for this clinical imaging (the test set) or the data provenance (e.g., country of origin, retrospective or prospective).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document does not specify the number of experts used or their qualifications for establishing ground truth, as it focuses on software verification and clinical image quality comparison.
4. Adjudication Method for the Test Set
The document does not specify any adjudication method. It implies a comparison of image quality, but the process of this comparison (e.g., blinded review, consensus) is not detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
No, an MRMC comparative effectiveness study was not done or reported. The study focused on demonstrating that the revised software did not degrade image quality or system performance compared to the previous version. It does not assess human reader improvement with AI assistance, as the changes are to the acquisition software, not an AI-assisted interpretation tool.
6. If a Standalone Study (Algorithm Only Without Human-in-the-Loop Performance) was Done
The primary change was the addition of a "Camera Center-of-Rotation (COR) quantitative check" and minor UI changes. This appears to be an internal technical algorithm within the acquisition software, not an independently evaluated "standalone" diagnostic algorithm. The testing described is more aligned with software verification and validation, ensuring the system's technical function, rather than diagnostic accuracy as a standalone AI. So, based on the information provided, a standalone study in the context of diagnostic AI performance was not done or reported.
7. The Type of Ground Truth Used
The most relevant "ground truth" implicitly used in this context would be the performance of the unmodified software/system as the benchmark for comparison. The goal was to demonstrate that the modified software produced "similar" quality images and met pre-defined technical acceptance criteria. There's no mention of external clinical ground truth like pathology or patient outcomes.
8. The Sample Size for the Training Set
This document describes software updates to an existing medical imaging system. It does not mention a training set in the context of machine learning. The changes are to data acquisition software, not an AI model that would typically require a training set.
9. How the Ground Truth for the Training Set was Established
As no training set is mentioned (see point 8), the method for establishing its ground truth is not applicable.