510(k) Data Aggregation
(196 days)
The SOMATOM Emotion and SOMATOM Sensation family CT systems are intended to produce cross-sectional images of the body by computer reconstruction of x-ray transmission data from either the same axial plane taken at different angles or spiral planes* taken at different angles.
(*spiral planes: the axial planes resulting from the continuous rotation of the detectors and x-ray tube, combined with the simultaneous translation of the patient.)
syngo® CT 2014A (SOMARIS/5 VB42) is a new scanning software version for the following, already commercially available, systems:
- SOMATOM Emotion 6 CT System
- SOMATOM Emotion 16
- SOMATOM Project P30F/Sensation 64
- SOMATOM Project P30L/Sensation Open
- SOMATOM P30 CT/Sensation
The system software is a command-based program used for patient management, data management, X-ray scan control, image reconstruction, and image archiving/evaluation. The new software version, syngo® CT 2014A (SOMARIS/5 VB42), provides bug fixes and an upgrade for the existing commercially available SOMATOM Emotion and SOMATOM Sensation family CT systems, improving maintainability.
The SOMATOM Emotion and SOMATOM Sensation family CT systems are whole body X-ray Computed Tomography Systems. They produce CT images in DICOM format, which can be used by post-processing applications commercially distributed by Siemens and other vendors.
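The document notes only that the systems output images in the standard DICOM format; it does not describe any implementation. As a hedged illustration of what "DICOM format" means at the file level, the sketch below checks for the DICOM Part 10 signature (a 128-byte preamble followed by the magic bytes `DICM`); the function name and demo file are illustrative, not from the submission.

```python
import os
import tempfile

def is_dicom_part10(path: str) -> bool:
    """Return True if the file carries a DICOM Part 10 header:
    a 128-byte preamble followed by the 4-byte magic 'DICM'."""
    with open(path, "rb") as f:
        header = f.read(132)
    return len(header) == 132 and header[128:132] == b"DICM"

# Demo with a synthetic file: 128-byte preamble followed by the magic bytes.
with tempfile.NamedTemporaryFile(delete=False, suffix=".dcm") as tmp:
    tmp.write(b"\x00" * 128 + b"DICM")
path = tmp.name
print(is_dicom_part10(path))  # True for this synthetic header
os.remove(path)
```

Any DICOM-conformant post-processing application, from Siemens or another vendor, would recognize files laid out this way before parsing the data elements that follow the preamble.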
The document provided does not contain a study comparing the device performance against acceptance criteria in the way described in your request (e.g., specific metrics like sensitivity, specificity, or AUC, with corresponding acceptance thresholds).
Instead, the document is a 510(k) premarket notification for a software update to existing CT systems. It primarily focuses on demonstrating substantial equivalence to previously cleared devices and adherence to relevant industry standards for safety and performance.
Here's a breakdown of the information that is present or can be inferred:
1. A table of acceptance criteria and the reported device performance:
This specific table is not provided in the document. The document states:
- "The testing results support the conclusion that all of the software specifications have met the acceptance criteria." (Page 6)
- However, it does not detail what those specific acceptance criteria were or present a table of results against them. The non-clinical testing section lists adherence to various standards (ISO 14971, IEC 62304, IEC 60601 series, NEMA DICOM, NEMA XR-25). These standards themselves contain performance requirements that would serve as acceptance criteria, but no specific performance metrics like sensitivity or specificity are discussed in the context of disease detection.
2. Sample size used for the test set and the data provenance:
- Sample Size for Test Set: Not explicitly mentioned. The document refers to "verification/validation as well as phantom testing" (Page 6) but does not provide details on the number of cases or phantoms used for these tests.
- Data Provenance: Not applicable in the context of clinical image data for evaluation. The testing involved "phantom testing" (Page 6).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable, as no clinical study involving human expert ground truth for specific pathologies or conditions is described. The testing focused on technical performance and adherence to engineering standards.
4. Adjudication method for the test set:
- Not applicable, as no clinical study requiring adjudication of expert readings is described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance:
- No MRMC study was done. This document describes a software update for CT systems, not an AI-assisted diagnostic tool. Therefore, there's no discussion of human reader improvement with AI.
6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done:
- Yes, in a way, standalone (algorithm only) testing was done for the software components. The non-clinical testing described involves "verification/validation as well as phantom testing" (Page 6). This would assess the software's performance (e.g., image reconstruction quality, dose management features) independently, but it's not a standalone diagnostic algorithm in the sense of detecting or classifying disease.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
- The ground truth for the technical tests would be the expected or reference values from the phantoms used and the specifications derived from the relevant engineering standards (e.g., image quality metrics, dose accuracy measurements). Not expert consensus or pathology.
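To make the notion of phantom-based ground truth concrete, here is a minimal sketch of how measured values could be compared against reference values within engineering tolerances. All metric names, reference values, and tolerances below are hypothetical illustrations, not figures from the 510(k).

```python
# Hypothetical phantom verification: each measured metric is compared
# against a reference (ground-truth) value from the phantom specification,
# within an allowed tolerance. Values are illustrative only.
PHANTOM_REFERENCE = {
    # metric: (reference value, allowed tolerance)
    "water_ct_number_hu": (0.0, 4.0),     # water should measure ~0 HU
    "air_ct_number_hu": (-1000.0, 10.0),  # air should measure ~-1000 HU
    "slice_thickness_mm": (5.0, 0.5),
}

def check_measurement(metric: str, measured: float) -> bool:
    """Return True if the measured value is within tolerance of the reference."""
    reference, tolerance = PHANTOM_REFERENCE[metric]
    return abs(measured - reference) <= tolerance

def evaluate_phantom_scan(measurements: dict) -> dict:
    """Evaluate a set of phantom measurements against the reference table."""
    return {m: check_measurement(m, v) for m, v in measurements.items()}

results = evaluate_phantom_scan({
    "water_ct_number_hu": 1.8,
    "air_ct_number_hu": -997.2,
    "slice_thickness_mm": 5.7,
})
print(results)
# {'water_ct_number_hu': True, 'air_ct_number_hu': True, 'slice_thickness_mm': False}
```

This is the general shape of pass/fail verification against specifications; the actual acceptance criteria used by the manufacturer are not disclosed in the document.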
8. The sample size for the training set:
- Not applicable. This is a software update for existing CT systems, not a machine learning model that requires a "training set" in the conventional sense of AI.
9. How the ground truth for the training set was established:
- Not applicable, as there is no training set mentioned for an AI model.
In summary:
This document is a regulatory submission for a software update to a Computed Tomography (CT) system. It addresses the technical performance and safety of the CT system's software, rather than the diagnostic performance of an AI algorithm in detecting specific conditions. The acceptance criteria and "study" are focused on demonstrating that the software update maintains the safety and effectiveness of the device and conforms to established medical device standards and regulations. The "performance" therefore refers to the system's ability to produce images and manage data according to its specifications and relevant standards, not its ability to detect clinical findings.