Search Results
Found 13 results
510(k) Data Aggregation
(71 days)
DIGIRAD CORP.
The ergo Imaging System is intended to image the distribution of radionuclides in the body by means of a photon radiation detector. In so doing, the system produces images depicting the anatomical distribution of radioisotopes within the human body for interpretation by authorized medical personnel. The ergo Imaging System is used by trained medical personnel to perform nuclear medicine studies.
It is indicated for lymphatic scintigraphy and parathyroid scintigraphy. It can be used intraoperatively when protected by sterile drapes. It is also indicated to aid in the evaluation of lesions in the breast and other small body parts. When used for breast imaging, it is indicated to serve as an adjunct to mammography or other primary breast imaging modalities.
The ergo Imaging System incorporates Digirad's Solid State RIM detector design with 3mm pixels for general purpose planar imaging, cleared under K100838. Sterile drapes are specified for intraoperative use. The ergo Imaging System, in conjunction with the optional Breast Imaging Accessory (BIA), enables the user to perform scintimammography and extremity imaging with stabilization.
The provided text is a 510(k) summary for the Digirad ergo Imaging System, which is a gamma camera. The document primarily focuses on demonstrating substantial equivalence to a predicate device and expanding indications for use.
Based on the provided text, the device itself is a gamma camera, not an AI/ML-based device. The "Testing" section (H) explicitly states: "Verification and Validation tests were conducted to demonstrate the ergo Imaging System functions per specification. These tests include Electromagnetic Compatibility, Electrical Safety, and gamma camera performance testing including NEMA standard NU 1-2007 with phantoms."
This indicates that the acceptance criteria and performance evaluation are related to the physical performance of the gamma camera, not to algorithmic performance on image interpretation. Therefore, the requested information elements related to AI/ML device testing (such as ground truth establishment with experts, MRMC studies, standalone algorithm performance, training/test set sample sizes for algorithms, etc.) are not applicable to this submission as described.
The acceptance criteria are likely standard NEMA performance metrics for gamma cameras. While the document broadly states "Testing results demonstrate that the ergo Imaging System continues to meet the specifications," it does not list specific numerical acceptance criteria or performance metrics in a table format within this 510(k) summary.
Therefore, many of the requested items cannot be extracted from this specific document.
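For context, a minimal, hypothetical sketch of one NEMA-style gamma camera metric is shown below: energy resolution, expressed as the percent FWHM of a Tc-99m photopeak measured from a pulse-height spectrum. The function, the synthetic spectrum, and the 10% value are illustrative assumptions; NEMA NU 1-2007 prescribes specific sources, phantoms, and procedures that this sketch does not reproduce, and the 510(k) summary itself reports no numerical results.

```python
import numpy as np

def energy_resolution_fwhm(bin_centers_kev, counts):
    """Estimate energy resolution (% FWHM) from a pulse-height spectrum.

    Assumes a roughly Gaussian photopeak; the FWHM is taken from the
    half-maximum crossings around the peak bin (linear interpolation).
    """
    peak_idx = int(np.argmax(counts))
    half = counts[peak_idx] / 2.0

    # Walk outward from the peak to the half-maximum crossings.
    left = peak_idx
    while left > 0 and counts[left] > half:
        left -= 1
    right = peak_idx
    while right < len(counts) - 1 and counts[right] > half:
        right += 1

    def crossing(i, j):
        # Linear interpolation of the energy at which counts == half.
        return bin_centers_kev[i] + (half - counts[i]) * (
            (bin_centers_kev[j] - bin_centers_kev[i]) / (counts[j] - counts[i])
        )

    fwhm_kev = crossing(right, right - 1) - crossing(left, left + 1)
    return 100.0 * fwhm_kev / bin_centers_kev[peak_idx]

# Illustrative spectrum: Gaussian photopeak at 140.5 keV (Tc-99m), ~10% FWHM.
bins = np.arange(100.0, 180.0, 0.5)
sigma = 0.10 * 140.5 / 2.355
spectrum = 1e4 * np.exp(-0.5 * ((bins - 140.5) / sigma) ** 2)
print(f"Energy resolution: {energy_resolution_fwhm(bins, spectrum):.1f}% FWHM")
```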
Here's an attempt to answer the quantifiable parts based on the provided text, while noting the limitations:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria: Not explicitly listed as numerical targets in the summary. Implied to be compliance with NEMA NU 1-2007 standards for gamma camera performance.
- Reported Device Performance: Not explicitly listed as numerical results in the summary. The summary states: "Testing results demonstrate that the ergo Imaging System continues to meet the specifications and is substantially equivalent to the predicate devices, based on comparisons of intended use and technology, and overall system performance."
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size: Not applicable in the context of an AI/ML test set. The testing described involves physical phantoms and engineering tests, not patient data sets.
- Data Provenance: Not applicable.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- Not applicable. Ground truth was established by physical phantoms and engineering measurements according to NEMA standards for gamma camera performance.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not applicable.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- No. This is not an AI/ML device, and no MRMC study is mentioned.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
- Not applicable. This is not an AI/ML diagnostic algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Physical phantoms and engineering specifications/measurements (NEMA standard NU 1-2007).
8. The sample size for the training set
- Not applicable. This is not an AI/ML device; there is no training set mentioned.
9. How the ground truth for the training set was established
- Not applicable.
In summary, the provided document describes a traditional medical device (gamma camera) and its regulatory submission, which relies on engineering performance standards (like NEMA) rather than clinical studies involving AI/ML performance on patient data sets with human experts.
(30 days)
DIGIRAD CORP.
The ergo Imaging System is intended to image the distribution of radionuclides in the body by means of a photon radiation detector. In so doing, the system produces images depicting the anatomical distribution of radioisotopes within the human body for interpretation by authorized medical personnel.
The proposed changes involve modifications to the 2020tc Imaging System to increase size of the detector head field of view (FOV) from 8"x 8" to 12"x 15". Modifications include mechanical and electrical design changes to support the large field-of-view (LFOV) detector head. The modified device (ergo Imaging System) incorporates Digirad's solid-state RIM detector design, modified with 3mm size pixels required for general purpose planar imaging. The 2020tc Imager detectors currently utilize the same 3mm pixel size.
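As a rough illustration of what the stated field-of-view change means at the 3mm pixel pitch, the short sketch below (a hypothetical calculation, not from the submission) converts the 8"x8" and 12"x15" detector fields of view into approximate pixel-matrix sizes; the actual Digirad module layout is not described in the summary.

```python
# Approximate pixel-matrix arithmetic for the FOV change described above.
MM_PER_INCH = 25.4
PIXEL_PITCH_MM = 3.0  # pixel size stated in the summary

def pixel_matrix(width_in, height_in):
    """Return (columns, rows) for a rectangular FOV at the given pixel pitch."""
    cols = round(width_in * MM_PER_INCH / PIXEL_PITCH_MM)
    rows = round(height_in * MM_PER_INCH / PIXEL_PITCH_MM)
    return cols, rows

for label, (w, h) in [('2020tc detector (8" x 8")', (8, 8)),
                      ('ergo LFOV detector (12" x 15")', (12, 15))]:
    cols, rows = pixel_matrix(w, h)
    print(f"{label}: ~{cols} x {rows} pixels ({cols * rows} total)")
```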
The detectors used in the modified and predicate device utilize a pixelated, multi-crystal CsI scintillator detector with each pixel optically coupled to a low noise photodiode array. The charge detected from each gamma ray is amplified and processed using an amplifier circuit. The RIM detector design technology is currently used in the Digirad Cardius XPO imager systems with a 6mm pixel size for Cardiac SPECT imaging. The RIM detector has been modified to incorporate a 3mm size pixel required for general planar imaging. The RIM detector design includes electrical and mechanical configurations allowing for field replacement of detector modules and improved system performance (better energy resolution). The updated design of the RIM detector head assembly allows some of the current sub-systems to be moved into the detector head assembly (air dryer) and simplification of others (cooling and power distribution systems).
The modified device uses the 2020tc Imaging System SeeQuanta Acquisition software, with minor modifications required for use with the 3mm pixel size RIM detector modules.
The 2020tc Imager was initially marketed as the Digirad Notebook Imager (K961104), then re-branded as the 2020tc Imaging System (K982855) when it was used in conjunction with the SPECTour Rotating Chair to obtain SPECT images in patients who are seated in an upright position. The modified device (ergo Imaging System) is a general-purpose nuclear medicine imaging device used for planar imaging, the same as the Notebook Imager/2020tc Imager when imaging without the rotating chair.
The provided text describes a 510(k) premarket notification for the "ergo Imaging System," a scintillation (gamma) camera. The submission focuses on demonstrating substantial equivalence to predicate devices, rather than a standalone clinical study proving specific acceptance criteria in terms of diagnostic performance metrics like sensitivity or specificity.
Here's an analysis of the provided information based on your questions:
1. Table of acceptance criteria and the reported device performance
The document does not specify quantitative acceptance criteria for diagnostic performance (e.g., sensitivity, specificity, accuracy) that are typically reported for AI/algorithm-based devices. Instead, it focuses on demonstrating functional equivalence and meeting design specifications compared to predicate devices. The acceptance criteria essentially revolve around the system performing as per its specifications, which are similar to the predicate device's functional specifications, and not raising new safety or effectiveness concerns.
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Functions as per specifications | All tests passed with actual results substantially matching expected results. System meets design specifications. |
Similar functional specifications to predicate devices | Design specifications are similar to predicate device functional specifications. |
Equivalent efficacy to predicate devices (no new safety/effectiveness questions) | Digirad internal testing and phantom images demonstrated equivalent efficacy to predicate devices, and did not raise new questions regarding safety and effectiveness. |
Substantially equivalent to predicate devices (intended use, technology, overall system performance) | Testing results demonstrate the ergo Imaging System meets specifications and is substantially equivalent based on comparisons of intended use, technology, and overall system performance. |
2. Sample size used for the test set and the data provenance
The document does not explicitly mention a "test set" in the context of diagnostic performance (e.g., a set of patient images for evaluation). The testing described is primarily internal verification and validation of the system's technical specifications and phantom images. Therefore, details like data provenance or sample size for a diagnostic test set are not provided.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable. The document describes technical testing and phantom images, not a clinical study involving experts establishing ground truth for diagnostic accuracy.
4. Adjudication method for the test set
Not applicable, as no diagnostic test set is described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC study is mentioned. This device is a gamma camera, a hardware imaging system, not an AI-assisted diagnostic software.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
This is a hardware device; the concept of "standalone performance" for an algorithm doesn't directly apply in the same way it would for AI-driven software. The "system meets design specifications" and "equivalent efficacy to predicate devices" with phantom images represent its standalone performance relative to its intended function.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document refers to "phantom images" and "Digirad internal testing." This implies the "ground truth" for the testing was based on known properties of the phantoms and the expected technical performance of the imaging system. There is no mention of clinical ground truth like pathology or outcomes data.
8. The sample size for the training set
Not applicable. This document describes a hardware device submission, not a machine learning algorithm that requires a training set.
9. How the ground truth for the training set was established
Not applicable, as no training set is described.
(90 days)
DIGIRAD CORP.
The Cardius 3 X-ACT Imaging System is a gamma camera for Single Photon Emission Computed Tomography (SPECT) integrated with an attenuation device consisting of an x-ray generator. The Cardius 3 X-ACT Imaging System is intended for use in the generation of cardiac studies, including planar and SPECT studies, in nuclear medicine applications. Cardius 3 X-ACT produces non-attenuation-corrected SPECT images and attenuation-corrected SPECT images with x-ray transmission data that may also be corrected for scatter.
The Cardius® 3 X-ACT Imaging system is a gamma camera for the acquisition and processing of Single Photon Emission Computed Tomography (SPECT) as well as correcting attenuation artifacts associated with these Emission studies. The device consists of an x-ray generator integrated with the previously cleared Cardius 3 XPO triple head SPECT system to provide attenuation correction functionality. The attenuation correction functionality will be included with the Cardius 3 X-ACT imaging system, or offered as an accessory for previously purchased Cardius 3 XPO systems that are configured to accept the attenuation correction functionality.
The Cardius 3 X-ACT Imaging system is designed to provide extended imaging functionality relative to the current Cardius XPO series imagers. A typical patient study comprises an emission (SPECT) study followed by a transmission study. The SPECT study is acquired using the similar system hardware and software technology as a Cardius-3 XPO imaging system. For the transmission study, the low dose x-ray generator is used to produce an attenuation map that is used for attenuation correction (AC) of the emission data. An iterative reconstruction technique then uses the attenuation map and the SPECT data as input, and the AC and non-AC reconstructed volumes are saved in the database for physician review. The attenuation correction data provided is additional information that may be reviewed by the interpreter. The original SPECT data remains available to the interpreter.
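Digirad's reconstruction and attenuation-correction algorithms are proprietary and not detailed in the summary. As a generic illustration of the idea described above (an iterative reconstruction that takes the attenuation map and the SPECT data as input), here is a toy MLEM-style update in which per-ray attenuation survival factors, such as would be derived from the x-ray transmission scan, weight the system matrix. All names, sizes, and values are assumptions.

```python
import numpy as np

def mlem_with_attenuation(projections, system_matrix, survival_factors, n_iters=50):
    """Toy MLEM reconstruction with per-ray attenuation weighting.

    system_matrix    : (n_rays, n_voxels) geometric projection weights
    survival_factors : (n_rays,) exp(-line integral of mu) from the attenuation map
    """
    A = system_matrix * survival_factors[:, None]   # attenuation-weighted system matrix
    x = np.ones(A.shape[1])                         # uniform initial estimate
    sens = A.sum(axis=0)                            # sensitivity (normalization) term
    for _ in range(n_iters):
        expected = A @ x                            # forward-project current estimate
        ratio = projections / np.maximum(expected, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative MLEM update
    return x

# Tiny illustrative problem: 3 voxels, 4 rays, noiseless data.
A_geom = np.array([[1.0, 1.0, 0.0],
                   [0.0, 1.0, 1.0],
                   [1.0, 0.0, 1.0],
                   [1.0, 1.0, 1.0]])
survival = np.array([0.90, 0.80, 0.85, 0.75])       # assumed attenuation factors
true_activity = np.array([2.0, 5.0, 1.0])
measured = (A_geom * survival[:, None]) @ true_activity
print(np.round(mlem_with_attenuation(measured, A_geom, survival), 2))  # approaches [2. 5. 1.]
```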
The provided text describes the Digirad Cardius 3 X-ACT Imaging System, a gamma camera for SPECT imaging with integrated X-ray-based attenuation correction. The document focuses on its substantial equivalence to predicate devices and does not contain a detailed study with specific acceptance criteria, reported performance metrics, or information on ground truth establishment and expert adjudication typical for a clinical study comparing AI performance against established criteria.
Therefore, many of the requested fields cannot be directly extracted from the provided text. The device received 510(k) clearance based on demonstrating substantial equivalence to existing predicate devices, rather than meeting specific quantitative performance acceptance criteria in a clinical study that would involve expert readers assessing AI output.
Here's a breakdown of what can and cannot be answered based on the provided input:
1. A table of acceptance criteria and the reported device performance
Acceptance Criteria | Reported Device Performance |
---|---|
(Not explicitly stated as quantitative acceptance criteria in the provided text) | The testing shows Cardius 3 X-ACT meets the design specifications, which are similar to the predicate device functional specifications. Digirad internal testing and clinical images obtained with the Cardius 3 X-ACT imaging system have demonstrated equivalent efficacy to the predicate devices, and did not raise new questions regarding safety and effectiveness. |
Demonstrates substantial equivalence to predicate devices based on intended use and technology | The FDA issued a substantial equivalence determination. |
Functions as per its specifications (bench testing) | All tests passed with the actual results substantially matching the expected results. |
2. Sample size used for the test set and the data provenance
- Sample Size: Not specified. The document mentions "clinical images obtained with the Cardius 3 X-ACT imaging system" but does not quantify the number of images or patients.
- Data Provenance: Not specified (e.g., country of origin, retrospective or prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified. The document mentions "physician review" but no details on expert qualifications for establishing ground truth are provided.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Adjudication Method: Not specified. Given that the clearance is based on substantial equivalence and "equivalent efficacy," a formal adjudication process for a test set is not explicitly described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study Done: No. The device itself is an imaging system providing attenuation corrected images, not an AI assistance tool for human readers.
- Effect Size of Human Readers Improvement with AI: Not applicable, as this is not an AI-assisted diagnostic device. The attenuation correction data is described as "additional information that may be reviewed by the interpreter."
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
- Standalone Performance: Not explicitly evaluated in the context of an "algorithm only" performance against a ground truth in the way modern AI devices are. The device "produces non-attenuation corrected SPECT Images and attenuation corrected SPECT images with x-ray transmission data," implying the output is for physician review. The device's performance is implicit in its ability to produce these corrected images and its "equivalent efficacy" to predicate devices.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: Not specified. The basis for "equivalent efficacy" is not detailed, but it would likely be based on expert interpretation of images for relevant clinical findings (e.g., myocardial perfusion defects) compared to predicate devices. It is not stated if this involved pathology or outcomes data.
8. The sample size for the training set
- Sample Size for Training Set: Not applicable. The device uses "Digirad's internally developed proprietary algorithms" (for reconstruction and attenuation correction) but does not describe these as "trained" in the typical machine learning sense with a distinct training set. The algorithms are part of the system's design and function.
9. How the ground truth for the training set was established
- Ground Truth for Training Set Establishment: Not applicable for the reasons stated above.
Summary of what the document provides for regulatory clearance:
The device received 510(k) clearance by demonstrating substantial equivalence to existing predicate devices (Philips Medical Systems BrightView VCT Imaging System and Digirad's Cardius 3 XPO Imaging System). The equivalence was based on:
- Similar intended use: Gamma camera for SPECT imaging, generating cardiac studies, and providing attenuation-corrected images.
- Similar technology: Gamma camera for SPECT imaging and X-ray based attenuation correction.
- Design Specifications: Bench testing confirmed the device met its design specifications, which were similar to the predicate device's functional specifications.
- Equivalent Efficacy: Internal testing and clinical images demonstrated "equivalent efficacy" and did not raise new safety or effectiveness concerns compared to predicate devices.
The document does not detail a clinical study with quantitative performance metrics for the device against specific acceptance criteria, nor does it describe a ground truth process involving expert readers in the context of an AI device. Instead, it focuses on the engineering and functional demonstration of equivalence.
(78 days)
DIGIRAD CORP.
The STASYS Motion Correction software program is intended for use in correcting patient motion artifacts in SPECT data acquired on a nuclear medicine gamma camera system.
STASYS™ is a software application developed by Digirad for the correction of SPECT acquisition motion artifacts from gated and non-gated projection datasets. When the program is activated, STASYS uses algorithms developed by Digirad to minimize motion error metrics over the set of acquired projections. The resulting STASYS corrected projections are presented to the operator for acceptance or rejection of the correction. With STASYS software, cardiac SPECT studies acquired with both parallel hole and non-parallel hole collimators can be motion corrected. The STASYS software has the same indications for use and function as the Cedars-Sinai designed MoCo software, currently being used on Digirad SPECT imaging systems and processing workstations.
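The STASYS algorithms themselves are proprietary. To illustrate the general idea stated above, minimizing a motion error metric over the set of acquired projections and presenting the corrected projections for review, here is a minimal, hypothetical sketch that shifts each projection along the axial direction to best match its predecessor under a sum-of-squared-differences metric. Real motion-correction programs use more robust metrics, sub-pixel shifts, and gated-dataset handling.

```python
import numpy as np

def correct_axial_motion(projections, max_shift=5):
    """Shift each projection (axis 0 = axial bins) to best match its predecessor.

    Uses a brute-force search over integer shifts and a sum-of-squared-
    differences error metric; purely illustrative.
    """
    corrected = [projections[0].copy()]
    for frame in projections[1:]:
        best_shift, best_err = 0, np.inf
        for s in range(-max_shift, max_shift + 1):
            err = np.sum((np.roll(frame, s, axis=0) - corrected[-1]) ** 2)
            if err < best_err:
                best_shift, best_err = s, err
        corrected.append(np.roll(frame, best_shift, axis=0))
    return np.stack(corrected)

# Illustrative use: 16 views of 32 x 32 bins, with a simulated 2-bin axial
# patient shift halfway through the acquisition.
rng = np.random.default_rng(0)
views = np.tile(rng.poisson(10.0, (1, 32, 32)), (16, 1, 1)).astype(float)
views[8:] = np.roll(views[8:], 2, axis=1)   # simulate patient motion
fixed = correct_axial_motion(views)
print(np.abs(fixed[8] - fixed[0]).sum())    # ~0 after correction
```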
The provided document describes the STASYS™ Motion Correction Software, intended for correcting patient motion artifacts in SPECT data. The submission is a 510(k) summary, aiming to demonstrate substantial equivalence to previously cleared devices.
Based on the provided text, the following information can be extracted or inferred:
1. A table of acceptance criteria and the reported device performance
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Software functions correctly and meets specifications | All tests passed with actual results matching expected results. |
Performance is substantially equivalent to predicate devices | Software performs as well as predicate devices. |
Safe and effective | Deemed as safe, effective, and performs as well as predicate devices. |
Intended use aligns with predicate devices | The indications for use are the same as the predicate devices. |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size: Not explicitly stated. The document mentions "Verification and Validation tests," but does not provide details on the number of SPECT studies or datasets used in these tests.
- Data Provenance: Not explicitly stated. The document does not specify the country of origin of the data or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not explicitly stated. The document does not mention the use of experts to establish a ground truth for testing. The testing focuses on comparing the software's performance to its specifications and predicate devices.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not explicitly stated. There is no mention of an adjudication method in the testing description. The evaluation appears to be based on whether test results matched expected results and if the software performed equivalently to predicate devices.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned. The document describes the software's ability to correct motion artifacts and its equivalence to predicate devices, but it does not evaluate the improvement in human reader performance with or without the AI.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
Yes, the testing described appears to be a standalone evaluation of the algorithm. The software is designed to "minimize motion error metrics" and produce "corrected projections," which are then "presented to the operator for acceptance or rejection." The verification and validation tests assess the software's performance against specifications and predicate devices, suggesting an evaluation of the algorithm's output independently, even if human review is the final step in clinical use.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document does not explicitly define a "ground truth" in the traditional sense for diagnostic accuracy. Instead, the testing framework seems to rely on:
- Specifications: Whether the software's output aligned with pre-defined expected results for motion correction.
- Predicate Device Performance: Comparison of the STASYS software's performance to the established performance of the cleared Cedars-Sinai MoCo software. This implies that the predicate devices' outputs or their established effectiveness served as a benchmark for "ground truth" regarding desired motion correction.
8. The sample size for the training set
Not applicable/Not explicitly stated. The document refers to "algorithms developed by Digirad" and "internally developed proprietary algorithms," but it does not mention a "training set" or a machine learning model that would require one in the context of typical AI/ML development. This appears to be a rule-based or traditional signal processing algorithm rather than a data-driven machine learning model requiring a distinct training phase with labeled data.
9. How the ground truth for the training set was established
Not applicable/Not explicitly stated. As mentioned above, the document does not indicate the use of a training set for a machine learning model.
(25 days)
DIGIRAD CORP.
Cardius-1, Cardius-2, Cardius-3, Cardius 1 XPO, Cardius 2 XPO, Cardius 3 XPO Imaging Systems:
The Cardius product models are intended for use in the generation of cardiac studies, including planar and Single Photon Emission Computed Tomography (SPECT) studies, in nuclear medicine applications.
2020tc SPECT Imaging System:
The Digirad 2020tc SPECT Imaging system is intended for use in the generation of both planar and Single Photon Emission Computed Tomography (SPECT) clinical images in nuclear medicine applications. The Digirad SPECT Rotating Chair is used in conjunction with the Digirad 2020tc Imager™ to obtain SPECT images in patients who are seated in an upright position.
Specifically, the 2020tc Imager™ is intended to image the distribution of radionuclides in the body by means of a photon radiation detector. In so doing, the system produces images depicting the anatomical distribution of radioisotopes within the human body for interpretation by authorized medical personnel.
The proposed change involves an updated version of nSPEED™ (3D-OSEM) reconstruction software to process cardiac SPECT studies acquired with non-parallel hole collimators, using half time and/or half count densities. With the updated software, cardiac SPECT studies acquired with both parallel hole and non-parallel hole collimators, using half time and/or half count densities, can be processed.
Here's an analysis of the provided text to extract the acceptance criteria and study details:
1. Table of Acceptance Criteria and Reported Device Performance:
Acceptance Criteria | Reported Device Performance |
---|---|
Equivalent image quality between the previously cleared 2D-OSEM reconstruction (processing "preferred" time/count data) and the new nSPEED™ (3D-OSEM) reconstruction (processing half-time and/or half-count density data) | Digirad testing performed with cardiac phantom images and a multicenter evaluation with data from over 450 patient images showed equivalent image quality between the two processing methods. |
Very good quantitative correlation between the previously cleared 2D-OSEM reconstruction (processing "preferred" time/count data) and the new nSPEED™ (3D-OSEM) reconstruction (processing half-time and/or half-count density data) | The same phantom testing and multicenter evaluation showed very good quantitative correlation between the two processing methods. |
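The submission does not state how the quantitative correlation was computed. Purely as an illustration, the sketch below shows one common way such a comparison could be quantified: a Pearson correlation between paired quantitative values (for example, segmental perfusion scores) from the full-data 2D-OSEM and the half-time nSPEED reconstructions of the same studies. The function and data are hypothetical.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two matched sets of values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd)))

# Hypothetical paired segmental values: full-data 2D-OSEM vs. half-time nSPEED.
full_data_2d_osem = np.array([72.0, 65.5, 80.1, 58.9, 69.3, 74.8])
half_time_nspeed = np.array([71.2, 66.0, 79.5, 60.1, 68.7, 75.4])
print(f"r = {pearson_r(full_data_2d_osem, half_time_nspeed):.3f}")
```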
2. Sample Size Used for the Test Set and Data Provenance:
- Sample Size: Over 450 patient images
- Data Provenance: Multicenter evaluation; acquired with parallel hole and non-parallel hole collimators using Digirad imaging systems. The location of the centers (e.g., country of origin) is not specified. The study appears to be retrospective, as it uses "data from over 450 patient images acquired."
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts:
This information is not provided in the submitted text. The submission mentions "Digirad testing" and "multicenter evaluation" but does not detail how ground truth was established or if experts were involved in a formal capacity for image quality assessment or quantitative correlation.
4. Adjudication Method for the Test Set:
This information is not provided in the submitted text.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No, an MRMC comparative effectiveness study was not explicitly done in the context of human readers improving with AI vs. without AI assistance.
- The study focuses on demonstrating the equivalence of a new reconstruction software (nSPEED™ 3D-OSEM) compared to an older one (2D-OSEM) for processing reduced dose/time cardiac SPECT data. It assesses image quality and quantitative correlation of the output images, not the performance of human readers with or without an AI assist. This is a technical performance study of image processing, not a clinical efficacy study with human readers as the primary endpoint.
6. If a Standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Yes, a standalone evaluation of the algorithm's output was done. The study's conclusion is that the new nSPEED™ software "produce[s] equivalent image quality and very good quantitative correlation" when compared to the previously cleared 2D-OSEM technique. This assessment of image quality and quantitative correlation is an evaluation of the algorithm's output in a standalone manner.
7. The Type of Ground Truth Used:
- The ground truth reference appears to be the images processed with the previously cleared 2D-OSEM reconstruction technique using "preferred" time and/or count data. The new software's output is compared to this established and cleared method. This could be considered a form of clinical consensus or established practice as defined by the "ASNC 2008 Imaging Guidelines." It is not directly pathology or outcomes data.
8. The Sample Size for the Training Set:
The text does not provide any information regarding a training set size. This indicates that the software update was likely evaluated for its performance characteristics on clinical data rather than being a de novo AI model that required a separate training phase to learn a specific task. Reconstruction algorithms are typically developed and then validated.
9. How the Ground Truth for the Training Set Was Established:
As no training set is mentioned or implied for this specific submission, there is no information on how its ground truth might have been established.
(25 days)
DIGIRAD CORP.
Cardius 1 XPO, Cardius 2 XPO, Cardius 3 XPO Imaging Systems:
The Cardius product models are intended for use in the generation of cardiac studies, including planar and Single Photon Emission Computed Tomography (SPECT) studies, in nuclear medicine applications.
2020tc SPECT Imaging System:
The Digirad 2020tc SPECT Imaging system is intended for use in the generation of both planar and Single Photon Emission Computed Tomography (SPECT) clinical images in nuclear medicine applications. The Digirad SPECT Rotating Chair is used in conjunction with the Digirad 2020tc Imager™ to obtain SPECT images in patients who are seated in an upright position.
Specifically, the 2020tc Imager™ is intended to image the distribution of radionuclides in the body by means of a photon radiation detector. In so doing, the system produces images depicting the anatomical distribution of radioisotopes within the human body for interpretation by authorized medical personnel.
Mirage XP is an automated processing and interpretation software package. This software will be available as standard software on the Digirad imaging systems and/or as a standalone software package on a workstation or a laptop. The enhancements to previous versions of software include automated processing, preference based selections, improved EF algorithm and segment scoring for quantification and interpretation.
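The summary names an improved EF algorithm and segment scoring but gives no formulas. For orientation only, here is a hypothetical sketch of the basic arithmetic behind those quantities: ejection fraction from end-diastolic and end-systolic volumes, and a summed score over per-segment perfusion grades (e.g., the common 17-segment model). This is not the Mirage XP implementation.

```python
def ejection_fraction(edv_ml, esv_ml):
    """Left-ventricular ejection fraction (%) from ED and ES volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

def summed_score(segment_grades):
    """Summed segment score; grades are 0 (normal) through 4 (absent uptake)."""
    return sum(segment_grades)

# Illustrative values only.
print(f"EF  = {ejection_fraction(edv_ml=120.0, esv_ml=55.0):.0f}%")                   # ~54%
print(f"SSS = {summed_score([0, 0, 1, 2, 0, 1, 0, 0, 3, 0, 0, 0, 1, 0, 0, 0, 0])}")   # 8
```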
The provided text is a 510(k) summary for a medical device (SPECT Imaging System with new software). It describes the changes to the device (software update), its intended use, and the conclusion from testing. However, it does not include specific acceptance criteria, detailed study results, or information about ground truth establishment, expert adjudication, or training/test set sample sizes as requested in the prompt.
The core conclusion from the provided text is that the new software functions as intended and produces images equivalent to previous versions, without impacting safety or effectiveness. This is a general statement rather than a detailed performance report against specific criteria.
Therefore, I cannot populate the table and answer all questions based on the provided text alone. I will indicate where information is Not Available (N/A) from the text.
Acceptance Criteria and Device Performance Study Summary
The provided 510(k) summary focuses on a software update (Mirage XP, equivalent to Segami's Mirage 5.6) for existing Digirad SPECT imaging systems. The study detailed in the summary is primarily a functional verification and equivalence assessment rather than a detailed performance study against numerical acceptance criteria.
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Criteria (as implied or stated) | Reported Device Performance |
---|---|---|
Functional Equivalence | The new Mirage XP software can be installed and functions as intended on Digirad imaging systems. | Confirmed. Software installs and functions as intended. |
Safety | Use of Mirage XP software (with existing acquisition/reconstruction) does not result in known anomalies impacting safety. | No known anomalies impacting safety. |
Effectiveness | Use of Mirage XP software (with existing acquisition/reconstruction) does not result in known anomalies impacting effectiveness, including operator usage and human factors. | No known anomalies impacting effectiveness, including operator usage and human factors. |
Image Quality | Quality of images produced with Mirage XP software is equivalent to those seen in previous versions of Mirage software used on Digirad imaging systems. | Image quality is equivalent to previous versions of Mirage software. |
Automated Processing | Specific performance criteria for enhancements (e.g., speed, accuracy of segmentation, EF algorithm improvements) are not specified. | Enhancements include automated processing, preference-based selections, improved EF algorithm, and segment scoring for quantification and interpretation. No specific performance metrics or acceptance criteria for these improvements are provided in this summary. |
2. Sample size used for the test set and the data provenance
- Sample Size: Not Available (N/A). The document states "Testing was done," but does not specify the number of cases, scans, or an exact test set size.
- Data Provenance: Not Available (N/A). The document does not mention the country of origin of the data or whether the data was retrospective or prospective. It only implies that the testing was likely conducted internally by Digirad Corporation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not Available (N/A).
- Qualifications of Experts: Not Available (N/A).
- The comparison for image quality ("equivalent to those seen in previous versions") suggests an internal subjective assessment, but no details on expert involvement are provided.
4. Adjudication method for the test set
- Adjudication Method: Not Available (N/A). No information on adjudication methods for the test set (e.g., 2+1, 3+1, none) is provided.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance
- MRMC Study: No, an MRMC comparative effectiveness study was not done or reported in this summary. The device in question is software for SPECT imaging system processing and interpretation, which includes "automated processing, preference based selections, improved EF algorithm and segment scoring for quantification and interpretation," but it's not explicitly described as an "AI assistance" device in the context of improving human reader performance.
- Effect Size: Not Available (N/A). Since an MRMC study was not reported, there's no information on the effect size of human reader improvement with or without AI assistance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
- Yes, implicitly. The software "Mirage XP is an automated processing and interpretation software package." The statement that "Mirage XP software can be installed and functions as intended" and produces "equivalent" images suggests that the algorithm's standalone functional and image output capabilities were assessed. However, no specific standalone performance metrics (e.g., detection sensitivity/specificity for particular conditions) were provided, as the focus was on functional equivalence to previous versions.
7. The type of ground truth used
- Type of Ground Truth: Not explicitly stated. The comparison is primarily against the performance of previous versions of the software and images, implying a "referent standard" of previously accepted image quality and processing outputs. It does not mention an independent ground truth like pathology, outcomes data, or a consensus of multiple expert interpretations for clinical findings.
8. The sample size for the training set
- Sample Size for Training Set: Not Available (N/A). This document is about a new release of pre-existing software. Information on the training set used during the development of Mirage 5.6 (or Mirage XP) is not included in this 510(k) summary.
9. How the ground truth for the training set was established
- Ground Truth Establishment for Training Set: Not Available (N/A). As the training set size and details are not provided, neither is information on how its ground truth was established.
(28 days)
DIGIRAD CORP.
Cardius-1, Cardius-2, Cardius-3:
The Cardius product models are intended for use in the generation of cardiac studies, including planar and Single Photon Emission Computed Tomography (SPECT) studies, in nuclear medicine applications.
2020tc SPECT Imaging System:
The Digirad 2020tc SPECT Imaging system is intended for use in the generation of both planar and Single Photon Emission Computed Tomography (SPECT) clinical images in nuclear medicine applications. The Digirad SPECT Rotating Chair is used in conjunction with the Digirad 2020tc Imager™ to obtain SPECT images in patients who are seated in an upright position.
Specifically, the 2020tc Imager™ is intended to image the distribution of radionuclides in the body by means of a photon radiation detector. In so doing, the system produces images depicting the anatomical distribution of radioisotopes within the human body for interpretation by authorized medical personnel.
The changes to the Digirad 2020tc and Cardius SPECT imaging cameras involve addition of an Image Stabilization System. The proposed Image Stabilization System is used to correct image studies for patient motion in SPECT data acquired with Digirad nuclear medicine gamma camera systems. The Image Stabilization System consists of two parts: a hardware component that mounts to the SPECT Imaging System, and a software module that collects data from the hardware and corrects the image data for motion. The resulting motion-corrected patient study data is referred to as the Image Stabilized Patient Study. The Image Stabilization System may operate only with the above-described Digirad camera models and is compatible with proprietary Digirad Acquisition Software under the Windows operating system and standard PC architecture.
The proposed Image Stabilization System performs substantially the same function as the currently cleared Cedars-Sinai Motion Correction Program (MoCo), cleared for use on Digirad SPECT Imaging Systems under Digirad 510(k) #K023110.
The proposed Image Stabilization System automatically produces an Image Stabilized Patient Study, corrected for patient motion, which is available in the existing database. The original image study is produced in an identical manner as in the previously cleared devices. Both studies are stored in the same patient record in the database. Additional minor changes were made to the User Interface screen.
The Image Stabilized Patient Studies produced by the proposed device are identical in file structure to the original, unmodified data set; therefore SeeQuanta 1.2 and the Image Stabilization System are fully compatible with the same database, reconstruction software, and processing software that is used with the "cleared" devices. Hence, there are no changes to these software modules.
This proposed optional software addition will be available to Digirad customers both integrated with the Digirad 2020tc SPECT Imaging System, and Cardius-1, Cardius-2, and Cardius-3 SPECT Imaging Systems, and separately as a retrofit device for existing Digirad Product Customers.
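The hardware component measures patient motion and the software module corrects the acquired data, but the data format and correction method are not described in the summary. As a minimal sketch under assumed conventions (per-view axial offsets already converted from millimeters to detector bins), the following shows how hardware-reported offsets could be applied to projection frames, in contrast to the data-driven approach sketched earlier for the STASYS entry; the names and values are hypothetical.

```python
import numpy as np

def apply_tracked_offsets(projection_frames, axial_offsets_bins):
    """Shift each projection frame back by the hardware-reported patient offset.

    projection_frames  : (n_views, n_axial, n_transaxial) acquired counts
    axial_offsets_bins : (n_views,) measured axial displacement per view, in bins
    """
    corrected = np.empty_like(projection_frames)
    for i, (frame, offset) in enumerate(zip(projection_frames, axial_offsets_bins)):
        corrected[i] = np.roll(frame, -int(round(offset)), axis=0)  # undo the motion
    return corrected

# Illustrative use: a tracked 2-bin axial shift during the last views is undone;
# the corrected copy is kept alongside the untouched original study, mirroring
# the dual-study storage described above.
rng = np.random.default_rng(1)
base = rng.poisson(8.0, (32, 32)).astype(float)
views = np.stack([np.roll(base, 2, axis=0) if i >= 10 else base for i in range(16)])
offsets = np.array([2.0 if i >= 10 else 0.0 for i in range(16)])
stabilized = apply_tracked_offsets(views, offsets)
print(np.abs(stabilized - base).sum())   # 0.0: motion removed in the corrected copy
```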
Here's an analysis of the provided text regarding the KD5243D 510(k) submission, focusing on the acceptance criteria and study details:
1. Table of Acceptance Criteria and Reported Device Performance
The provided text focuses on the equivalence of the Image Stabilization System to existing predicate devices, rather than establishing new performance metrics. Therefore, explicit numerical acceptance criteria and a direct comparison table as might be seen for a diagnostic accuracy study are not present. Instead, the "acceptance criteria" are implied to be "similar quality" of corrected images.
Acceptance Criterion (Implied) | Reported Device Performance |
---|---|
Image Quality | The quality of the phantom images corrected with the Image Stabilization System with the modified acquisition software was similar to the quality of the images post-processed corrected using the MoCo Motion Correction program (a predicate device). |
Functionality | The Image Stabilization System performs substantially the same function as the currently cleared Cedars-Sinai Motion Correction Program (MoCo) and other predicate devices (Mirage software, Cedars-Sinai BPGS and MoCo). |
Design Outputs | Extensive Verification testing was completed on all cleared Digirad SPECT Imaging Systems integrated with the Image Stabilization Device to demonstrate that the design outputs met the design inputs of the proposed Image Stabilization Accessory Device. All software test results met pre-defined acceptance criteria (specific criteria not detailed). |
Data Compatibility | Image Stabilized Patient Studies are identical in file structure to the original, unmodified data set, ensuring full compatibility with existing database, reconstruction, and processing software. |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The text states, "Testing was performed to analyze the content of corrected phantom studies using the Image Stabilization Accessory integrated with the Digirad Cardius-1 camera." It does not specify the number of phantom studies included in this analysis.
- Data Provenance: The testing was performed using "phantom studies," indicating simulated or controlled data rather than patient data. The country of origin for this data is not specified but is presumed to be internal testing by Digirad Corporation (USA). The study is prospective in the sense that the new device was used to correct specific phantom data, but the data itself is not from real patients.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not provided in the text. Since the testing involved phantom studies and comparison to a predicate device's output, it's unlikely that a panel of medical experts was used to establish ground truth in the traditional sense. The "ground truth" for phantom studies is typically defined by the known characteristics of the phantom and the expected ideal image.
4. Adjudication Method for the Test Set
This information is not provided. Given the nature of the testing (phantom studies comparing image quality to a predicate), a formal adjudication method like 2+1 or 3+1 is unlikely to have been employed. The comparison was likely a technical assessment of image characteristics.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly mentioned or described. The study focused on the technical performance and similarity of image quality for the motion correction system itself, not on the impact of this correction on human reader performance in interpreting patient studies.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done
Yes, a standalone performance evaluation was conducted. The text describes "Testing was performed to analyze the content of corrected phantom studies using the Image Stabilization Accessory" and "extensive Verification testing was completed on all cleared Digirad SPECT Imaging Systems integrated with the Image Stabilization Device to demonstrate that the design outputs met the design inputs." This indicates the algorithm's performance in correcting phantom images was evaluated independently.
7. The Type of Ground Truth Used
The ground truth used was based on phantom studies and comparison to the output of predicate device software (MoCo). For a phantom, the "ground truth" is the known ideal image of the phantom without motion artifacts. The acceptance criterion was that the corrected phantom images from the new system were "similar in quality" to those corrected by the predicate MoCo program.
8. The Sample Size for the Training Set
This information is not provided. The submission describes the addition of a new hardware and software module for image stabilization. It does not mention whether this module uses a machine learning algorithm that requires a training set. If it's a traditional image processing algorithm, a "training set" in the machine learning sense might not apply.
9. How the Ground Truth for the Training Set Was Established
Since a training set is not mentioned and it's unclear if machine learning was used, the method for establishing its ground truth is not applicable/not provided.
(30 days)
DIGIRAD CORP.
Cardius-1, Cardius-2, Cardius-3:
The Cardius product models are intended for use in the generation of cardiac studies, including planar and Single Photon Emission Computed Tomography (SPECT) studies, in nuclear medicine applications.
2020tc SPECT Imaging System:
The Digirad 2020tc SPECT Imaging system is intended for use in the generation of both planar and Single Photon Emission Computed Tomography (SPECT) clinical images in nuclear medicine applications. The Digirad SPECT Rotating Chair is used in conjunction with the Digirad 2020tc Imager™ to obtain SPECT images in patients who are seated in an upright position.
Specifically, the 2020tc Imager™ is intended to image the distribution of radionuclides in the body by means of a photon radiation detector. In so doing, the system produces images depicting the anatomical distribution of radioisotopes within the human body for interpretation by authorized medical personnel.
The changes to the Cardius and 2020tc cameras involve modifications to the data acquisition software used on the gamma cameras. The primary change to the data acquisition software involves the addition of a Camera Center-of-Rotation (COR) quantitative check. Additional minor changes were made to the User Interface screen. There were no hardware changes to the cameras.
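A center-of-rotation check is commonly performed by acquiring a point source over a full rotation, fitting the transaxial centroid versus gantry angle to a sinusoid, and reporting the constant term of the fit as the COR offset. The sketch below is a generic, hypothetical version of such a check; the fitting model is standard, but the data, names, and the 0.5-bin pass/fail limit are illustrative and not taken from Digirad's acceptance criteria.

```python
import numpy as np

def cor_offset(angles_deg, centroids_bins):
    """Least-squares fit of centroid(theta) = a*sin(theta) + b*cos(theta) + c.

    The constant term c is the center-of-rotation offset in detector bins.
    """
    theta = np.radians(angles_deg)
    design = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    (_, _, c), *_ = np.linalg.lstsq(design, centroids_bins, rcond=None)
    return c

# Illustrative point-source acquisition: 64 views over 360 degrees, a true COR
# offset of 0.4 bins, and a little measurement noise.
angles = np.linspace(0.0, 360.0, 64, endpoint=False)
rng = np.random.default_rng(2)
centroids = 10.0 * np.sin(np.radians(angles) + 0.3) + 0.4 + rng.normal(0, 0.05, 64)
offset = cor_offset(angles, centroids)
print(f"COR offset ~= {offset:.2f} bins; {'PASS' if abs(offset) < 0.5 else 'FAIL'} (illustrative 0.5-bin limit)")
```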
The provided text describes modifications to the software of existing SPECT imaging systems (Cardius and 2020tc cameras). The primary change involved adding a Camera Center-of-Rotation (COR) quantitative check and minor user interface adjustments. The core functionality and intended use of the devices remained unchanged.
Here's an analysis of the acceptance criteria and study information, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Pre-defined acceptance criteria for software test results | All software test results met pre-defined acceptance criteria. |
Quality of clinical images with modified software | The quality of the clinical images produced with the modified software was similar to the quality of the images produced with the unmodified software. |
2. Sample Size Used for the Test Set and Data Provenance
The document mentions "clinical imaging with modified and unmodified software." However, it does not specify the sample size used for this clinical imaging (the test set) or the data provenance (e.g., country of origin, retrospective or prospective).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document does not specify the number of experts used or their qualifications for establishing ground truth, as it focuses on software verification and clinical image quality comparison.
4. Adjudication Method for the Test Set
The document does not specify any adjudication method. It implies a comparison of image quality, but the process of this comparison (e.g., blinded review, consensus) is not detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
No, an MRMC comparative effectiveness study was not done or reported. The study focused on demonstrating that the revised software did not degrade image quality or system performance compared to the previous version. It does not assess human reader improvement with AI assistance, as the changes are to the acquisition software, not an AI-assisted interpretation tool.
6. If a Standalone Study (Algorithm Only Without Human-in-the-Loop Performance) was Done
The primary change was the addition of a "Camera Center-of-Rotation (COR) quantitative check" and minor UI changes. This appears to be an internal technical algorithm within the acquisition software, not an independently evaluated "standalone" diagnostic algorithm. The testing described is more aligned with software verification and validation, ensuring the system's technical function, rather than diagnostic accuracy as a standalone AI. So, based on the information provided, a standalone study in the context of diagnostic AI performance was not done or reported.
7. The Type of Ground Truth Used
The most relevant "ground truth" implicitly used in this context would be the performance of the unmodified software/system as the benchmark for comparison. The goal was to demonstrate that the modified software produced "similar" quality images and met pre-defined technical acceptance criteria. There's no mention of external clinical ground truth like pathology or patient outcomes.
8. The Sample Size for the Training Set
This document describes software updates to an existing medical imaging system. It does not mention a training set in the context of machine learning. The changes are to data acquisition software, not an AI model that would typically require a training set.
9. How the Ground Truth for the Training Set was Established
As no training set is mentioned (see point 8), the method for establishing its ground truth is not applicable.
(27 days)
DIGIRAD CORP.
The Cardius product models are intended for use in the generation of cardiac studies, including planar and Single Photon Emission Computed Tomography (SPECT) studies, in nuclear medicine applications.
The Cardius model cameras are gantry (open gantry, upright seated) devices designed for single- (Cardius-1) or dual-detector (Cardius-2) cardiac nuclear imaging studies. The device includes an acquisition/processing station, a gantry, one or two detectors, and a gantry-mounted patient imaging chair. The patient imaging chair is mechanized to accommodate patient loading and cardiac centering in the detector field of view.
The imaging chair motion control allows for acquisition of Cardiac SPECT and planar studies, including static, dynamic, gated/non-gated SPECT (circular orbit only).
The provided text describes the regulatory clearance for the Digirad Cardius-1 and Cardius-2 SPECT imaging systems, focusing on their substantial equivalence to predicate devices. However, it does not contain the detailed information required to fill in all sections of the requested acceptance criteria and study design.
Specifically, the document states: "Comprehensive verification and validation testing was performed with the Cardius-1 and Cardius-2 devices including; hardware, software, electrical safety, and clinical imaging. All test results met pre-defined acceptance criteria. The quality of the clinical images produced were similar to the quality of the images produced by the predicate devices."
This statement confirms that testing was done and acceptance criteria were met, but it does not provide the specific acceptance criteria, reported device performance metrics, or details about the clinical study itself.
Here's a breakdown of what can and cannot be answered based on the provided text:
1. A table of acceptance criteria and the reported device performance
- Cannot be provided. The text only states that "All test results met pre-defined acceptance criteria" and "The quality of the clinical images produced were similar to the quality of the images produced by the predicate devices." It does not specify what those acceptance criteria were (e.g., specific sensitivity, specificity, image quality metrics) or the exact reported performance against those criteria.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Cannot be provided. The document vaguely mentions "clinical imaging" as part of the testing. There is no information on the sample size of any test set, the country of origin of data, or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Cannot be provided. No information is given about how ground truth was established or by whom.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Cannot be provided. No information on adjudication is present.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- Cannot be provided. The device is a SPECT imaging system, not explicitly an AI/CAD device for interpretation. The document doesn't mention any MRMC study or AI assistance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
- Not applicable/Cannot be determined. This device is an imaging system (hardware and software for acquisition and basic processing), not a standalone diagnostic algorithm. No information about algorithm-only performance is available.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Cannot be provided. The document does not specify the type of ground truth used for any clinical testing.
8. The sample size for the training set
- Cannot be provided. There is no mention of a "training set." The device is an imaging system, and while it would have been developed using some data, the text doesn't describe a formal "training set" in the context of an AI/ML algorithm.
9. How the ground truth for the training set was established
- Cannot be provided. As there's no mention of a training set, there's no information on how its ground truth was established.
Summary of available information:
The document focuses on establishing substantial equivalence to predicate devices (Digirad 2020tc SPECT Imaging System and ADAC Forte). The primary evidence cited for meeting acceptance criteria is that "All test results met pre-defined acceptance criteria" and "The quality of the clinical images produced were similar to the quality of the images produced by the predicate devices." This implies that the acceptance criteria primarily revolved around demonstrating performance similar to or not worse than existing cleared devices, rather than establishing absolute performance benchmarks with detailed clinical study metrics as would be expected for a novel diagnostic algorithm.
(64 days)
DIGIRAD CORP.
The Cedars-Sinai Motion Correction (MoCo) software program is intended for use in correcting patient motion artifacts in SPECT data acquired on a nuclear medicine gamma camera system.
The MoCo program is an independent, standalone software application developed by Cedars-Sinai Medical Center for the automatic and manual correction of SPECT acquisition motion artifacts from gated and ungated projection datasets. MoCo is the most popular motion correction application in the field of nuclear myocardial perfusion SPECT imaging. The software has the same indication for use and function as the Motion Correction function module of Mirage software (Segami Corporation, K972886), which is currently being used on Digirad 2020tc SPECT Imaging systems.
The provided document describes the Cedars-Sinai Motion Correction (MoCo) Software, a standalone application for correcting patient motion artifacts in SPECT data. However, it does not contain information regarding detailed acceptance criteria, a specific study proving it meets these criteria, or a table comparing acceptance criteria to device performance metrics beyond a general statement of "Functionality tests were conducted to demonstrate that the MoCo software application functioned as per its specifications. All tests passed with the actual results substantially matching the expected results."
Here's an analysis of the available information, noting what is missing based on your request:
1. Table of Acceptance Criteria and Reported Device Performance
Not available in the provided document. The submission states that functionality tests were conducted and passed, but it does not specify the quantitative acceptance criteria or the reported performance metrics (e.g., specific accuracy, sensitivity, or specificity values) from these tests.
2. Sample Size Used for the Test Set and Data Provenance
Not available in the provided document. The document mentions "functionality tests" but does not detail the number of cases or datasets used in these tests. Information on the country of origin of the data or whether it was retrospective or prospective is also not provided.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
Not available in the provided document. The method and number of experts for establishing ground truth for any test set are not mentioned.
4. Adjudication Method for the Test Set
Not available in the provided document. No information is provided regarding any adjudication method (e.g., 2+1, 3+1, none) used for the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
Not available in the provided document. There is no mention of an MRMC study being conducted, nor any effect size regarding human reader improvement with or without AI assistance.
6. Standalone Algorithm Performance
The document states that the MoCo program is an "independent, standalone software application." The functionality tests described imply a standalone evaluation, where the software's performance against its specifications was assessed. However, specific performance metrics are not given.
7. Type of Ground Truth Used
Not available in the provided document. The document does not specify how the "expected results" for the functionality tests were determined or what type of ground truth (e.g., expert consensus, pathology, outcomes data) was used.
8. Sample Size for the Training Set
Not available in the provided document. The document does not provide any information about a training set or its sample size.
9. How Ground Truth for the Training Set Was Established
Not available in the provided document. Since no training set is mentioned, there is no information on how its ground truth was established.
Summary of available information related to testing:
The document mentions "Functionality tests were conducted to demonstrate that the MoCo software application functioned as per its specifications. All tests passed with the actual results substantially matching the expected results." This indicates that some form of internal validation was performed to ensure the software performed as designed, but the details of this validation (specific criteria, data used, and quantitative outcomes) are not present in this submission. The submission primarily focuses on the device's classification, intended use, and substantial equivalence to predicate devices, rather than a detailed performance study with quantifiable acceptance criteria.