syngo.CT CaScoring is an image analysis software package for evaluating CT data sets.
The software is designed to support the physician in evaluating and documenting calcified coronary lesions, using standard or low-dose spiral or sequential CT scanning data sets. After loading non-contrasted cardiac CT images, syngo.CT CaScoring (SOMARIS/8 VB50) can be used to interactively mark calcified coronary lesions and to allocate each lesion to one of several coronary arteries, that is, the right coronary artery (RCA), the left main coronary artery (LM), the left anterior descending artery (LAD), and the left circumflex artery (CX).
syngo.CT CaScoring calculates the Agatston-equivalent score, the mass score, and the volume score of each coronary artery, as well as the corresponding total scores across all coronary arteries. syngo.CT CaScoring allows the user to create a paper report including the calcium scoring data, any user-documented images, cited literature, and additional relevant information.
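To make the score definitions above concrete, the following minimal sketch computes per-lesion and per-vessel scores from segmented lesion masks. The 130 HU threshold and the density-weighting bins are the standard values from the calcium-scoring literature, the `calibration_factor` is a placeholder for a scanner-specific mass calibration, and the per-lesion data structure is hypothetical; none of this is taken from the syngo.CT CaScoring implementation.

```python
import numpy as np

def agatston_weight(max_hu):
    """Standard Agatston density weight based on the lesion's peak HU."""
    if max_hu >= 400:
        return 4
    if max_hu >= 300:
        return 3
    if max_hu >= 200:
        return 2
    if max_hu >= 130:
        return 1
    return 0

def score_lesion(mask, hu, pixel_area_mm2, slice_thickness_mm, calibration_factor=1.0):
    """Return (Agatston-equivalent, volume in mm^3, mass) for one lesion in one slice.

    `mask` is a boolean array marking the lesion pixels and `hu` the HU values of
    the slice; `calibration_factor` is scanner-specific and set to 1.0 as a placeholder.
    """
    area_mm2 = mask.sum() * pixel_area_mm2
    agatston = area_mm2 * agatston_weight(hu[mask].max())
    volume = area_mm2 * slice_thickness_mm
    mass = calibration_factor * hu[mask].mean() * volume
    return np.array([agatston, volume, mass])

def vessel_and_total_scores(lesions):
    """Sum scores per vessel (RCA, LM, LAD, CX) and across all vessels.

    `lesions` is an iterable of (vessel_label, mask, hu, pixel_area_mm2,
    slice_thickness_mm) tuples, one per marked lesion slice.
    """
    per_vessel = {v: np.zeros(3) for v in ("RCA", "LM", "LAD", "CX")}
    for vessel, mask, hu, pixel_area_mm2, slice_thickness_mm in lesions:
        per_vessel[vessel] += score_lesion(mask, hu, pixel_area_mm2, slice_thickness_mm)
    total = sum(per_vessel.values())
    return per_vessel, total
```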
For the current software version SOMARIS/8 VB50, one major and one minor change have been implemented:
- Since the last 510(k) clearance of the predicate device (syngo.CT Calcium Scoring SOMARIS/8 VB40, K192763, clearance date 12/17/2019), the algorithm to precompute the calcium score has been enhanced and extended. In the subject device, the CaScoring algorithm was extended to label coronary calcifications as belonging to either the left main, left anterior descending, left circumflex, or right coronary artery.
- This version contains UI (user-interface) modifications.
The provided text outlines the performance evaluation of the syngo.CT CaScoring algorithm. Although it mentions a "bench test" and a "reader study," the details regarding the acceptance criteria and the studies are somewhat general. Based on the information available, here's a structured description:
Acceptance Criteria and Device Performance Study
The syngo.CT CaScoring device aims to automatically score and assign coronary calcifications to specific coronary arteries. The enhanced algorithm's performance was evaluated through a "bench test" and a "reader study."
1. Table of Acceptance Criteria and Device Performance
| Acceptance Criteria Category | Specific Criteria | Reported Device Performance/Conclusion |
|---|---|---|
| Bench Test Performance | Overall "adequate and acceptable performance" for the total Agatston-equivalent score and classification into the corresponding Agatston score categories (see the category sketch after this table). | "The summary of the bench test is that an adequate and acceptable performance of the automatic scoring algorithm was found for the total Agatston-equivalent score and the classification into the corresponding Agatston score categories, which are the aspects of calcium scoring that have a well-established impact on management recommendations. All pre-specified acceptability criteria were passed." |
| Reader Study Performance | Meeting "all prespecified acceptability thresholds." Comparison of performance for vessel-specific calcium scores. | "The conclusion of the reader study is that all prespecified acceptability thresholds were met by the results of this study." The study found "no statistically relevant difference between the performance of the three individual readers compared to their consensus, and the algorithm compared to the consensus" regarding vessel-specific assignments. It notes "less but still significant deviation of the automatic LM scores from the consensus annotations compared to the bench test." The overall pattern between the bench test and reader study populations is described as "very comparable." The performance of assigning calcifications to individual vessels is deemed "adequate and acceptable," especially considering the limited clinical relevance of vessel-specific calcium scores and the similar difficulty for human readers. |
| Algorithm Execution | Successful execution on all testing datasets. | "The algorithm was successfully executed on all testing datasets. No data has been excluded from the analysis." |
| Data Diversity (Bench Test) | Not explicitly stated as a criterion, but mentioned as a characteristic aiding representativeness. | The bench test population is described as "considerably more diverse" than the reader study population. |
| Representativeness (Reader Study) | Not explicitly stated as a criterion, but a conclusion made about the study's generalizability. | "The reader study population is from a single, modern scanner. The overall statistics on the performance of the automatic scoring algorithm demonstrate a good comparability with the bench test population, which is considerably more diverse. ... Thus, Siemens concludes that the results of the reader study are representative for the general performance of the algorithm." |
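The "Agatston score categories" referenced in the bench-test row are not enumerated in the submission. The sketch below uses cut-points that are commonly cited in the calcium-scoring literature (0, 1-10, 11-100, 101-400, >400) purely for illustration.

```python
def agatston_category(total_score):
    """Map a total Agatston-equivalent score to a commonly used risk category.

    The cut-points are an assumption for illustration; the submission does not
    state which category boundaries were used for the acceptance analysis.
    """
    if total_score == 0:
        return "no identifiable calcification"
    if total_score <= 10:
        return "minimal"
    if total_score <= 100:
        return "mild"
    if total_score <= 400:
        return "moderate"
    return "extensive"

print(agatston_category(57))  # -> "mild"
```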
2. Sample Size and Data Provenance
- Test Set Sample Size:
- Bench Test: Not explicitly stated, but described as "considerably more diverse" than the reader study population, implying a larger and/or more varied dataset.
- Reader Study: Not explicitly stated, but mentioned to be "from a single, modern scanner."
- Data Provenance: The document does not specify the country of origin of the data. It also does not explicitly state whether the data was retrospective or prospective. However, for a bench test and reader study evaluating an enhanced algorithm, it is common to use retrospective data.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: The reader study involved "three individual readers."
- Qualifications of Experts: Not explicitly stated within the provided text. It is generally assumed that "readers" in a medical imaging context are qualified medical professionals like radiologists or cardiologists.
4. Adjudication Method for the Test Set
- The reader study mentions comparing the performance of "the three individual readers compared to their consensus." This strongly implies a consensus-based ground truth. Common approaches include a majority vote among the readers (e.g., 2 out of 3) or an additional adjudicating reader (e.g., 2+1 or 3+1). The text does not specify the exact method, but the use of "their consensus" indicates that the three readers arrived at an agreed-upon ground truth; a majority-vote illustration is sketched below.
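Because the submission does not describe the adjudication rule, the following is only a plausible illustration of how three readers' per-lesion vessel labels could be reduced to a consensus by majority vote; the function name and the tie-handling are assumptions.

```python
from collections import Counter

def consensus_label(reader_labels):
    """Majority vote over per-reader vessel labels for a single lesion.

    `reader_labels` is a list such as ["LAD", "LAD", "CX"]. If all three
    readers disagree, None is returned and the lesion would need explicit
    adjudication (e.g., a joint review) in practice.
    """
    label, count = Counter(reader_labels).most_common(1)[0]
    return label if count >= 2 else None

print(consensus_label(["LAD", "LAD", "CX"]))  # -> "LAD"
print(consensus_label(["LAD", "CX", "RCA"]))  # -> None
```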
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Yes, a multi-reader multi-case (MRMC) study was implicitly part of the "reader study." The study involved "three individual readers" and compared their performance (as part of a consensus group) against the algorithm.
- Effect Size: The text states: "No statistically relevant difference between the performance of the three individual readers compared to their consensus, and the algorithm compared to the consensus was found." This indicates that the algorithm performed comparably to the individual readers relative to the expert consensus, particularly for vessel-specific assignments, where the task was similarly difficult for humans and the algorithm. It is not a with/without AI assistance comparison: the text does not provide a quantitative effect size (e.g., AUC increase or accuracy percentage-point change) for human reader improvement with versus without AI assistance. The focus is on whether the algorithm performs comparably to expert consensus (an agreement-metric sketch follows this item).
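The comparison described above (each reader against the consensus, and the algorithm against the consensus) can be framed as an agreement analysis. The sketch below uses Cohen's kappa on per-lesion vessel labels; the choice of statistic is an assumption for illustration, as the submission does not state which metric was used.

```python
from sklearn.metrics import cohen_kappa_score

def agreement_with_consensus(assignments, consensus):
    """Cohen's kappa of each rater's per-lesion vessel labels against the consensus.

    `assignments` maps a rater name (the three readers and the algorithm) to a
    list of vessel labels; `consensus` is the list of consensus labels in the
    same lesion order.
    """
    return {rater: cohen_kappa_score(labels, consensus)
            for rater, labels in assignments.items()}

# Hypothetical example with two lesions:
print(agreement_with_consensus(
    {"reader_1": ["LAD", "RCA"], "algorithm": ["LAD", "CX"]},
    ["LAD", "RCA"],
))
```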
6. Standalone (Algorithm Only) Performance
- Yes, a standalone performance evaluation was done. The "bench test" assessed the "adequate and acceptable performance of the automatic scoring algorithm" independently. The "reader study" also compared the algorithm's performance directly against the human consensus, indicating an assessment of its standalone capabilities.
7. Type of Ground Truth Used
- The ground truth for the test sets (both bench test and reader study) was established by expert consensus annotations. The text explicitly mentions "consensus annotations" for comparing automatically assigned LM scores and for comparing reader performance.
8. Sample Size for the Training Set
- The sample size for the training set is not specified in the provided text. It only mentions that the "automatic scoring algorithm was retrained on re-annotated data as part of the vessel assignment extension."
9. How Ground Truth for Training Set was Established
- The ground truth for the training set was established through re-annotation. The text states, "the automatic scoring algorithm was retrained on re-annotated data." While the specifics of the re-annotation (who performed it, how many experts, what adjudication was used) are not detailed, this implies that the training data underwent a human review and labeling process to serve as the training ground truth.
§ 892.1750 Computed tomography x-ray system.
(a) Identification. A computed tomography x-ray system is a diagnostic x-ray system intended to produce cross-sectional images of the body by computer reconstruction of x-ray transmission data from the same axial plane taken at different angles. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.
(b) Classification. Class II.