TruSPECT is intended for acceptance, transfer, display, storage, and processing of images for detection of radioisotope tracer uptake in the patient's body. The device uses various processing modes, supported by the various clinical applications, and various features designed to enhance image quality. The emission computerized tomography data can be coupled with registered and/or fused CT/MR scans and with physiological signals in order to depict, localize, and/or quantify the distribution of radionuclide tracers and anatomical structures in scanned body tissue for clinical diagnostic purposes. The acquired tomographic image may undergo emission-based attenuation correction.
Visualization tools include segmentation, colour coding, and polar maps. Analysis tools include Quantitative Perfusion SPECT (QPS), Quantitative Gated SPECT (QGS) and Quantitative Blood Pool Gated SPECT (QBS) measurements, Multi Gated Acquisition (MUGA) and Heart-to-Mediastinum activity ratio (H/M).
The system also includes reporting tools for formatting findings and user selected areas of interest. It is capable of processing and displaying the acquired information in traditional formats, as well as in three-dimensional renderings, and in various forms of animated sequences, showing kinetic attributes of the imaged organs.
TruSPECT is based on the Windows operating system. Depending on specific customer requirements and clinical focus, TruSPECT can be configured with different combinations of Windows-based software options and clinical applications intended to assist the physician in diagnosis and/or treatment planning, including commercially available post-processing software packages.
TruSPECT is a processing workstation primarily intended for, but not limited to, cardiac applications. The workstation can be integrated with the D-SPECT cardiac scanner system or used as a standalone post-processing station.
The TruSPECT Processing Station is a software-only medical device (SaMD) designed to operate on a dedicated, high-performance computer platform. It is distributed as pre-installed medical imaging software intended to support image visualization, quantitation, analysis, and comparison across multiple imaging modalities and acquisition time points. The software supports both functional imaging modalities, such as Single Photon Emission Computed Tomography (SPECT) and Nuclear Medicine (NM), and anatomical imaging modalities, such as Computed Tomography (CT).
The system enables integration, display, and analysis of multimodal image datasets to assist qualified healthcare professionals in image review and interpretation within the clinical workflow. The software is intended for use by trained medical professionals and assists in image assessment for various clinical applications, including but not limited to cardiology, electrophysiology, and organ function evaluation. The software does not perform automated diagnosis and does not replace the clinical judgment of the user.
The TruSPECT software operates on the Microsoft Windows® operating system and can be configured with various software modules and clinical applications according to user requirements and intended use. The configuration may include proprietary Spectrum Dynamics modules and commercially available third-party post-processing software packages operating within the TruSPECT framework.
The modified TruSPECT system integrates the TruClear AI application as part of its software suite. The TruClear AI module is a software-based image processing component designed to assist in the enhancement of SPECT image data acquired on the TruSPECT system. The module operates within the existing reconstruction and review workflow and does not alter the system's intended use, indications for use, or fundamental technology.
Verification and validation activities were performed to confirm that the TruClear AI module functions as intended and that overall system performance remains consistent with the previously cleared TruSPECT configuration. These activities included performance evaluations using simulated phantom datasets and representative clinical image data, conducted in accordance with FDA guidance. The results demonstrated that the modified TruSPECT system incorporating TruClear AI meets all predefined performance specifications and continues to operate within the parameters of its intended clinical use.
Here's a breakdown of the acceptance criteria and study details for the TruClear AI module of the TruSPECT Processing Station, based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Reported Device Performance
| Parameter | Acceptance Criteria | Reported Device Performance (Key Performance Results) |
|---|---|---|
| LVEF | Bland–Altman mean: ±3% | Strong correlation (r = 0.94); Bland–Altman mean differences within pre-specified acceptance criteria |
| LVEF | Bland–Altman SD: ≤4% | Implicitly met (mean differences within criteria) |
| LVEF | Regression r (min): >0.8 | r = 0.94 |
| LVEF | Slope (range): 0.9–1.1 | Implicitly met (mean differences within criteria) |
| LVEF | Intercept (limit): ±10% | Implicitly met (mean differences within criteria) |
| EDV | Bland–Altman mean: ±5 mL | Strong correlation (r = 0.98); Bland–Altman mean differences within pre-specified acceptance criteria |
| EDV | Bland–Altman SD: ≤8 mL | Implicitly met (mean differences within criteria) |
| EDV | Regression r (min): >0.8 | r = 0.98 |
| EDV | Slope (range): 0.9–1.1 | Implicitly met (mean differences within criteria) |
| EDV | Intercept (limit): ±10 mL | Implicitly met (mean differences within criteria) |
| Perfusion Volume | Bland–Altman mean: ±5 mL | Strong correlation; Bland–Altman mean differences within pre-specified acceptance criteria |
| Perfusion Volume | Bland–Altman SD: ≤8 mL | Implicitly met (mean differences within criteria) |
| Perfusion Volume | Regression r (min): >0.8 | Implicitly met (strong correlation noted) |
| Perfusion Volume | Slope (range): 0.9–1.1 | Implicitly met (mean differences within criteria) |
| Perfusion Volume | Intercept (limit): ±10 mL | Implicitly met (mean differences within criteria) |
| TPD | Bland–Altman mean: ±3% | Strong correlation (r = 0.98); Bland–Altman mean differences within pre-specified acceptance criteria |
| TPD | Bland–Altman SD: ≤5% | Implicitly met (mean differences within criteria) |
| TPD | Regression r (min): >0.8 | r = 0.98 |
| TPD | Slope (range): 0.9–1.1 | Implicitly met (mean differences within criteria) |
| TPD | Intercept (limit): ±10% | Implicitly met (mean differences within criteria) |
| Visual Similarity (Denoised vs. Reference) | Not quantified as a numeric acceptance criterion, but implied | Denoised images rated "similar" to the high-count reference, consistent with high inter-reader agreement |
| Inter-observer Agreement (Visual Comparison) | Not explicitly quantified as an acceptance criterion | 97–100% agreement after dichotomization (scores ≥3 vs <3) across key metrics |
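To make the quantitative criteria above concrete, the following is a minimal sketch of how Bland–Altman and regression-based acceptance checks of this kind could be implemented, assuming paired per-study values from the denoised low-count images and the high-count reference (e.g., LVEF derived with QPS/QGS). The `check_acceptance` function, the thresholds as wired up here, and the sample values are illustrative assumptions, not the sponsor's actual analysis code.

```python
# Hypothetical acceptance check mirroring the LVEF row of the table above:
# Bland-Altman mean within +/-3%, SD <= 4%, regression r > 0.8,
# slope in 0.9-1.1, intercept within +/-10%.
import numpy as np
from scipy import stats

def check_acceptance(reference, test, mean_limit, sd_limit,
                     r_min=0.8, slope_range=(0.9, 1.1), intercept_limit=None):
    """Bland-Altman and regression statistics with pass/fail flags."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)

    # Bland-Altman statistics on the paired differences (test - reference)
    diff = test - reference
    ba_mean, ba_sd = diff.mean(), diff.std(ddof=1)

    # Linear regression of the test values against the reference values
    slope, intercept, r, _, _ = stats.linregress(reference, test)

    results = {
        "ba_mean": ba_mean, "ba_sd": ba_sd,
        "r": r, "slope": slope, "intercept": intercept,
        "ba_mean_ok": abs(ba_mean) <= mean_limit,
        "ba_sd_ok": ba_sd <= sd_limit,
        "r_ok": r > r_min,
        "slope_ok": slope_range[0] <= slope <= slope_range[1],
    }
    if intercept_limit is not None:
        results["intercept_ok"] = abs(intercept) <= intercept_limit
    return results

# Hypothetical paired LVEF values (%): high-count reference vs. denoised low-count
ref_lvef = [55, 62, 48, 70, 35, 58]
den_lvef = [56, 61, 49, 69, 36, 57]
print(check_acceptance(ref_lvef, den_lvef, mean_limit=3, sd_limit=4,
                       intercept_limit=10))
```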
Study Details
- Sample size used for the test set and the data provenance:
  - Test set sample size: 24 patients (8 female, 16 male), which yielded 74 images.
  - Data provenance: multi-center, retrospective dataset from three hospitals in the UK and Germany.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
  - Number of experts: two (2).
  - Qualifications of experts: independent, board-certified nuclear medicine physicians.
- Adjudication method for the test set:
  - The document states that "two independent, board-certified nuclear medicine physicians visually compared denoised low-count images to the high-count reference using a 5-point Likert scale; inter-observer percent agreement after dichotomization (scores ≥3 vs <3) was 97–100% across key metrics." This suggests a consensus-based approach for the visual similarity assessment rather than a formal 2+1 or 3+1 adjudication for defining disease status. The reference standard itself was the high-count image, and the experts compared the AI-processed images against that reference (see the agreement sketch after this list).
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
  - An MRMC comparative effectiveness study comparing human readers with and without AI assistance was not described. The study focused on validating the AI algorithm's output against a reference standard (the high-count image) using visual and quantitative assessment; the two nuclear medicine physicians compared the denoised images to the reference rather than evaluating their own diagnostic performance with and without AI.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
  - Yes, a standalone performance assessment of the algorithm was conducted. The quantitative evaluation used the FDA-cleared Cedars-Sinai QPS/QGS software to derive perfusion and functional parameters (TPD, perfusion volume, EDV, LVEF) and directly compared the algorithm's output on denoised low-count images to the high-count reference images. The Bland–Altman and correlation analyses serve as indicators of standalone performance.
- The type of ground truth used:
  - The primary reference standard (ground truth) for the study was the clinical routine high-count SPECT image (~1.0 MCounts) acquired under standard D-SPECT protocols.
  - For quantitative parameters, the FDA-cleared Cedars-Sinai QPS/QGS software was applied to the high-count reference images to derive ground truth values for perfusion and functional parameters (TPD, perfusion volume, EDV, LVEF).
  - For the visual assessment, the high-count reference images served as the ground truth for comparison.
- The sample size for the training set:
  - The total dataset comprised 352 patients. The held-out test set was 24 patients, so the remaining 328 patients (352 − 24) were used for training and tuning the algorithm.
- How the ground truth for the training set was established:
  - The document implies the same ground truth methodology was used for the training set as for the test set. The algorithm was trained to transform low-count images to match the characteristics of the clinical routine high-count SPECT images, which served as the "gold standard." Cedars-Sinai QPS/QGS would likewise have been applied to these high-count images to generate the quantitative targets, allowing the AI to learn to derive comparable quantitative parameters from denoised low-count images.
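As a companion to the adjudication item above, the following is a minimal sketch of the inter-observer percent-agreement computation after dichotomizing 5-point Likert scores at ≥3 vs <3. The `percent_agreement` helper and the reader scores are hypothetical and shown only to illustrate the calculation; they are not the study's actual data.

```python
# Hypothetical illustration of inter-observer percent agreement after
# dichotomizing 5-point Likert similarity scores at >=3 ("similar") vs <3.
def percent_agreement(reader1_scores, reader2_scores, threshold=3):
    """Percent of cases where both readers fall on the same side of `threshold`."""
    assert len(reader1_scores) == len(reader2_scores)
    agree = sum((a >= threshold) == (b >= threshold)
                for a, b in zip(reader1_scores, reader2_scores))
    return 100.0 * agree / len(reader1_scores)

# Hypothetical Likert scores from two readers for the same set of denoised images
reader1 = [4, 5, 3, 4, 2, 5, 4, 3]
reader2 = [5, 4, 3, 4, 3, 5, 4, 4]
print(f"{percent_agreement(reader1, reader2):.1f}% agreement")
```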