Syngo Carbon Space is software intended to display medical data and to support the review and analysis of medical images by trained medical professionals.
Syngo Carbon Space "Diagnostic Workspace" is indicated for display, rendering, and post-processing of medical data (mostly medical images) within healthcare institutions, for example in the fields of Radiology, Nuclear Medicine, and Cardiology.
Syngo Carbon Space "Physician Access" is indicated for display and rendering of medical data within healthcare institutions.
Syngo Carbon Space is a software-only medical device intended to be installed on recommended common IT hardware; the hardware is not considered part of the medical device. Syngo Carbon Space is intended to support the review and analysis of medical images by trained medical practitioners. The software is used in Radiology for reading images and throughout healthcare institutions for image and result distribution.
Syngo Carbon Space is a medical device provided in two variants/options:
- Diagnostic Workspace (Fat/Thick Client)
- Physician Access (Thin/Web Client)
Both options can be installed and run on the same machine and used simultaneously.
Syngo Carbon Space Diagnostic Workspace provides a reading workspace for Radiology that supports display of medical image data and documents and connects intelligent work tools (diagnostic and non-diagnostic software elements) to enable easy access to the needed data, easy access to external tools, and the creation of actionable results.
Syngo Carbon Space Physician Access provides a zero-footprint web application for enterprise-wide viewing of DICOM, non-DICOM, multimedia data and clinical documents to facilitate image and result distribution in the healthcare institution.
The provided text is a 510(k) summary for the Syngo Carbon Space (VA30A) device, a medical image management and processing system. The core of this submission is to demonstrate substantial equivalence to a predicate device (Syngo Carbon Space VA20A).
However, the provided text does not contain the detailed clinical or non-clinical performance test results with acceptance criteria and reported performance values that would typically be presented in a table. It states that "No clinical studies were carried out for the product, all performance testing was conducted in a non-clinical fashion as part of verification and validation activities of the medical device." and "There are no changes to the algorithm and its performance that requires a new bench testing for the subject device. The results/summary from the predicate device is still applicable for the subject device."
Therefore, I cannot populate the table of acceptance criteria and reported device performance from the provided text directly. The document focuses on demonstrating that the new version of the software (VA30A) maintains the same safety and effectiveness as the previous version (VA20A) by highlighting identical intended use, indications for use, contraindications, and core functionalities. It also details minor enhancements to existing features (like measurement tools and patient jacket functionality) and updates to supported operating systems and browsers, which were validated through non-clinical performance testing.
Here's a breakdown of the requested information based on the provided text, with explicit notes on what is NOT available:
1. A table of acceptance criteria and the reported device performance
This information is NOT explicitly detailed in the provided document. The document states that "all performance testing was conducted in a non-clinical fashion as part of verification and validation activities" and "The testing results support that all the software specifications have met the acceptance criteria." However, it does not enumerate specific acceptance criteria or the quantitative results of these non-clinical tests for the VA30A or its predicate beyond a high-level statement of conformance.
The document emphasizes that there are "no changes to the algorithm and its performance that requires a new bench testing for the subject device. The results/summary from the predicate device is still applicable for the subject device." This implies that the performance demonstrated by the predicate device (VA20A) is considered valid for the subject device (VA30A), but the specific performance metrics and their acceptance criteria are not provided in this summary.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: NOT specified. The document mentions "non-clinical performance testing" and "verification and validation activities" but does not provide details on the sample size of data or tests used for these validations.
- Data Provenance: NOT specified. Given that no clinical studies were performed, the data would have been synthetic or from internal testing environments. The origin (country) or nature (retrospective/prospective) of any test data is not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- NOT applicable/specified. Since no clinical studies were performed and the testing was non-clinical and focused on software verification and validation, there is no mention of experts establishing ground truth for a clinical test set. The validation would have been against pre-defined software requirements or simulated scenarios.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- NOT applicable/specified. As no clinical studies or reader studies are reported, there's no mention of an adjudication method for ground truth.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No, an MRMC comparative effectiveness study was NOT done. The document explicitly states: "No clinical studies were carried out for the product". The device is a "Medical Image Management and Processing System" and doesn't appear to include AI-assisted diagnostic capabilities (the text mentions "No automated diagnostic interpretation capabilities like CAD are included. All image data are to be interpreted by trained personnel.").
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Yes, in spirit, standalone non-clinical testing was performed for software verification. While not described as an "algorithm only" performance study in the context of diagnostic accuracy, the entire submission is based on "non-clinical performance testing" and "verification and validation activities" to ensure the software functions as intended and meets specifications. The document states: "Performance tests were conducted to test the functionality of the device Syngo Carbon Space. These tests have been performed to assess the functionality of the subject device."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- NOT applicable/specified in the context of clinical ground truth. For non-clinical software testing, ground truth would be established by specified software requirements, functional correctness, performance benchmarks (e.g., speed, data integrity, display accuracy against known inputs), and adherence to standards (DICOM, HL7). The document doesn't detail how specific "ground truths" were established for these technical tests.
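As a purely illustrative sketch (not something described in the 510(k) summary), a non-clinical verification check of "display accuracy against known inputs" might resemble the following; the file names, attribute checks, and use of pydicom/NumPy are assumptions for illustration only.

```python
# Hypothetical illustration only: verify that a viewer's decoded pixel data and
# key attributes match a known reference DICOM input. None of these file names
# or checks come from the submission; they only sketch the idea of testing
# functional correctness against a predefined expected result.
import numpy as np
import pydicom


def verify_display_accuracy(output_path: str, reference_path: str) -> bool:
    """Return True if the decoded output matches the known reference input."""
    output = pydicom.dcmread(output_path)
    reference = pydicom.dcmread(reference_path)

    # Data integrity: image geometry and bit depth must match exactly.
    if (output.Rows, output.Columns) != (reference.Rows, reference.Columns):
        return False
    if output.BitsStored != reference.BitsStored:
        return False

    # Display accuracy: pixel values must be identical for a lossless pipeline.
    return np.array_equal(output.pixel_array, reference.pixel_array)


if __name__ == "__main__":
    # Example usage with placeholder file names.
    print(verify_display_accuracy("viewer_output.dcm", "reference_input.dcm"))
```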
8. The sample size for the training set
- NOT applicable/specified. As this is not a submission for an AI/ML algorithm that requires a training set for model development, there is no mention of a training set size. The device is a software system for image management and processing.
9. How the ground truth for the training set was established
- NOT applicable/specified. (See point 8)