The STASYS Motion Correction software program is intended for use in correcting patient motion artifacts in SPECT data acquired on a nuclear medicine gamma camera system.
STASYS™ is a software application developed by Digirad for the correction of SPECT acquisition motion artifacts in gated and non-gated projection datasets. When the program is activated, STASYS uses algorithms developed by Digirad to minimize motion error metrics over the set of acquired projections. The resulting STASYS-corrected projections are presented to the operator for acceptance or rejection of the correction. With STASYS software, cardiac SPECT studies acquired with both parallel-hole and non-parallel-hole collimators can be motion corrected. The STASYS software has the same indications for use and function as the Cedars-Sinai-designed MoCo software currently used on Digirad SPECT imaging systems and processing workstations.
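The document does not disclose the proprietary algorithm, but the description of minimizing a motion error metric over the set of acquired projections can be illustrated with a simplified sketch. The example below is a hypothetical stand-in, not Digirad's method: it aligns each projection to its predecessor by searching integer vertical shifts for the one that minimizes a sum-of-squared-differences error, whereas a clinical implementation would use subpixel, model-based optimization.

```python
import numpy as np

def motion_correct(projections, max_shift=3):
    """Hypothetical illustration of frame-to-frame motion correction:
    align each projection to the previous corrected one by finding the
    integer row shift that minimizes a sum-of-squared-differences
    motion error metric. Not the actual STASYS algorithm."""
    corrected = [projections[0].copy()]
    shifts = [0]
    for frame in projections[1:]:
        ref = corrected[-1]
        best_shift, best_err = 0, np.inf
        # Exhaustive search over candidate vertical shifts.
        for s in range(-max_shift, max_shift + 1):
            candidate = np.roll(frame, s, axis=0)
            err = np.sum((candidate - ref) ** 2)
            if err < best_err:
                best_err, best_shift = err, s
        corrected.append(np.roll(frame, best_shift, axis=0))
        shifts.append(best_shift)
    return np.stack(corrected), shifts
```

In clinical use, as the document notes, the corrected projections would then be presented to the operator for acceptance or rejection rather than applied automatically.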
The provided document describes the STASYS™ Motion Correction Software, intended for correcting patient motion artifacts in SPECT data. The submission is a 510(k) summary, aiming to demonstrate substantial equivalence to previously cleared devices.
Based on the provided text, the following information can be extracted or inferred:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Software functions correctly and meets specifications | All tests passed, with actual results matching expected results. |
| Performance is substantially equivalent to predicate devices | Software performs as well as the predicate devices. |
| Safe and effective | Deemed safe, effective, and performing as well as the predicate devices. |
| Intended use aligns with predicate devices | The indications for use are the same as those of the predicate devices. |
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size: Not explicitly stated. The document mentions "Verification and Validation tests," but does not provide details on the number of SPECT studies or datasets used in these tests.
- Data Provenance: Not explicitly stated. The document does not specify the country of origin of the data or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not explicitly stated. The document does not mention the use of experts to establish a ground truth for testing. The testing focuses on comparing the software's performance to its specifications and predicate devices.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not explicitly stated. There is no mention of an adjudication method in the testing description. The evaluation appears to be based on whether test results matched expected results and if the software performed equivalently to predicate devices.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned. The document describes the software's ability to correct motion artifacts and its equivalence to predicate devices, but it does not evaluate the improvement in human reader performance with or without the AI.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
Yes, the testing described appears to be a standalone evaluation of the algorithm. The software is designed to "minimize motion error metrics" and produce "corrected projections," which are then "presented to the operator for acceptance or rejection." The verification and validation tests assess the software's performance against specifications and predicate devices, suggesting an evaluation of the algorithm's output independently, even if human review is the final step in clinical use.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document does not explicitly define a "ground truth" in the traditional sense for diagnostic accuracy. Instead, the testing framework seems to rely on:
- Specifications: Whether the software's output aligned with pre-defined expected results for motion correction.
- Predicate Device Performance: Comparison of the STASYS software's performance to the established performance of the cleared Cedars-Sinai MoCo software. This implies that the predicate devices' outputs or their established effectiveness served as a benchmark for "ground truth" regarding desired motion correction.
8. The sample size for the training set
Not applicable/Not explicitly stated. The document refers to "algorithms developed by Digirad" and "internally developed proprietary algorithms," but it does not mention a "training set" or a machine learning model that would require one in the context of typical AI/ML development. This appears to be a rule-based or traditional signal processing algorithm rather than a data-driven machine learning model requiring a distinct training phase with labeled data.
9. How the ground truth for the training set was established
Not applicable/Not explicitly stated. As mentioned above, the document does not indicate the use of a training set for a machine learning model.
§ 892.1200 Emission computed tomography system.
(a) Identification. An emission computed tomography system is a device intended to detect the location and distribution of gamma ray- and positron-emitting radionuclides in the body and produce cross-sectional images through computer reconstruction of the data. This generic type of device may include signal analysis and display equipment, patient and equipment supports, radionuclide anatomical markers, component parts, and accessories.

(b) Classification. Class II.