QUANTITATIVE GATED SPECT (QGS), QUANTITATIVE PERFUSION SPECT (QPS)
The Quantitative Gated SPECT (QGS) and Quantitative Perfusion SPECT (QPS) programs are intended for use in the display and quantification of myocardial perfusion and functional parameters from cardiac SPECT data.
The QGS/QPS programs are independent, standalone software applications developed by Cedars-Sinai Medical Center for the display and quantification of cardiac SPECT data. The programs run on computer systems with a PC architecture, Microsoft® Windows® operating systems, and third-party X-server software.
The QGS/QPS programs take reconstructed tomographic slices of the left ventricle generated from gated and/or non-gated cardiac SPECT studies and display the images along with automatically generated quantification.
QPS analyzes myocardial perfusion by quantifying defect extent and severity using gender and isotope-specific normal limits. 2D and 3D perfusion maps are automatically generated.
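The defect quantification described above can be illustrated with a minimal sketch. The z-score threshold, the per-segment scoring, and the function name below are assumptions for illustration only; the actual QPS algorithm and its normal-limits database are not detailed in the summary.

```python
import numpy as np

def defect_extent_severity(segment_counts, normal_mean, normal_sd, z_threshold=2.5):
    """Illustrative perfusion-defect quantification over polar-map segments.

    segment_counts: normalized per-segment perfusion values for the patient.
    normal_mean, normal_sd: gender- and isotope-specific normal limits
    (per segment), as referenced in the 510(k) summary.
    The threshold and scoring scheme here are assumptions, not QPS's method.
    """
    z = (normal_mean - segment_counts) / normal_sd  # positive z = below normal
    abnormal = z > z_threshold                      # segments flagged as defect
    extent = abnormal.mean() * 100.0                # % of myocardium abnormal
    severity = z[abnormal].sum() if abnormal.any() else 0.0  # summed defect z
    return extent, severity
```

For example, with two of four segments well below normal limits, the sketch reports 50% extent and a summed-severity score proportional to how far those segments fall below the mean.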
QGS analyzes myocardial function by quantifying global and regional ejection fraction, wall thickening, and left ventricular volume at end-diastole and end-systole. 2D and 3D images of perfusion and thickening are generated.
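The global ejection-fraction calculation from end-diastolic and end-systolic volumes uses the standard formula EF = (EDV − ESV) / EDV; a short sketch follows. The volumes themselves would come from QGS's automatic left-ventricular surface detection, which is not reproduced here.

```python
def ejection_fraction(edv_ml, esv_ml):
    """Global LV ejection fraction (%) from end-diastolic and end-systolic
    volumes in mL. Standard formula; input volumes are assumed to have been
    measured upstream (e.g. by gated-SPECT surface detection)."""
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV, EDV > 0")
    return 100.0 * (edv_ml - esv_ml) / edv_ml
```

For instance, EDV = 120 mL and ESV = 50 mL gives an EF of about 58.3%.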
Here's an analysis of the provided text regarding the acceptance criteria and study for the Digirad QGS and QPS software:
The provided document is a 510(k) summary for the Digirad QGS (Quantitative Gated SPECT) and QPS (Quantitative Perfusion SPECT) software. It focuses on demonstrating substantial equivalence to predicate devices rather than establishing novel safety and effectiveness through a comprehensive set of acceptance criteria and a dedicated study designed purely for that purpose. Instead, it relies on functionality tests and clinical validation performed previously by Cedars-Sinai Medical Center.
Based on the provided text, here's the information requested:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Functionality | Each software application functioned as per its specifications. |
| Platform Compatibility | QGS and QPS programs ran on computer systems with the proposed platform using a standard Cedars-Sinai test case. |
| Accuracy/Matching Previous Results | The test passed, with the actual results matching the Cedars-Sinai results. |
Note: The document does not explicitly state numerical acceptance criteria (e.g., minimum sensitivity, specificity, or correlation thresholds). The "acceptance criteria" here are implied by the successful completion of the functionality and platform tests and the agreement with Cedars-Sinai results.
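In effect, the platform test described above is a regression comparison of the new platform's quantitative outputs against pre-established reference results. A generic sketch of such a check follows; the output names, tolerance, and acceptance rule are assumptions, since the summary does not specify them.

```python
def matches_reference(actual, reference, rel_tol=1e-3):
    """Compare each quantitative output (e.g. EF, volumes, defect extent)
    against its reference value within a relative tolerance.

    actual, reference: dicts mapping output name -> numeric value.
    Returns (passed, mismatches); the tolerance is illustrative only.
    """
    mismatches = {
        name: (actual[name], reference[name])
        for name in reference
        if abs(actual[name] - reference[name])
           > rel_tol * max(abs(reference[name]), 1e-12)
    }
    return (len(mismatches) == 0, mismatches)
```

A "test passed" outcome, in these terms, means the mismatch set is empty for every output in the standard test case.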
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not explicitly stated for the "standard Cedars-Sinai test case" used for platform testing.
- Data Provenance:
- The "standard Cedars-Sinai test case" used for platform testing originated from Cedars-Sinai Medical Center.
- The clinical validation was conducted by Cedars-Sinai Medical Center and published in the Journal of Nuclear Medicine. The location and retrospective/prospective nature of the data for this clinical validation are not specified in the 510(k) summary itself; it can only be inferred that the published study used patient data, likely retrospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- The document does not provide details on the number or qualifications of experts specifically for establishing the ground truth of the "standard Cedars-Sinai test case" used for platform compatibility. It merely states the test passed, meaning the device's output matched "Cedars-Sinai results," implying a reference standard already established by Cedars-Sinai.
- For the clinical validation study mentioned, this information would be found in the Journal of Nuclear Medicine publication, but it is not included in this 510(k) summary.
4. Adjudication Method for the Test Set
- Adjudication Method: Not explicitly detailed. For the platform test, it seems to be a direct comparison of the device's output against pre-established "Cedars-Sinai results." This doesn't inherently suggest an adjudication process by multiple experts for the test case itself, but rather a verification against an already determined ground truth.
5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with vs. without AI Assistance
- No MRMC comparative effectiveness study was conducted or reported in this 510(k) summary. The submission emphasizes standalone software functionality and clinical validation, not reader performance with or without AI assistance. The device is intended for "display and quantification," which assists human readers, but the document does not measure the size of that effect.
6. Whether a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done
- Yes, a standalone performance was done. The functionality tests and the platform compatibility test demonstrate the algorithm's performance independent of a human-in-the-loop. The QGS/QPS programs themselves are described as "independent, standalone software applications." Their output is compared directly to specifications and known results, indicating standalone operation.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
- For the platform compatibility test, the ground truth was "Cedars-Sinai results." This suggests a pre-established reference standard, likely derived from expert analysis of the medical images or prior validated software results for that specific test case.
- For the clinical validation referenced, the ground truth would typically be established based on accepted clinical standards for myocardial perfusion and function analysis, which could include expert interpretation, follow-up data, or comparison to other validated diagnostic methods. This detail is not in the 510(k) summary, but in the cited publication.
8. The Sample Size for the Training Set
- Not provided. The software was "developed by Cedars-Sinai Medical Center," implying extensive use of their data for development and internal validation, but no specific training set size is mentioned in this document.
9. How the Ground Truth for the Training Set was Established
- Not provided. Given the development by a medical center, it's highly probable the ground truth for any training data used would have been established by clinical experts (e.g., cardiologists, nuclear medicine specialists) at Cedars-Sinai, often through expert consensus, review of patient outcomes, or comparison to other diagnostic modalities. However, the 510(k) summary does not detail this.