510(k) Data Aggregation
(122 days)
The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross-sectional images, spectroscopic images and/or spectra, and that displays, depending on optional local coils that have been configured with the system, the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.
The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.
MAGNETOM Flow.Ace and MAGNETOM Flow.Plus are 60cm-bore MRI systems with quench pipe-free, sealed magnets utilizing DryCool technology. They are equipped with BioMatrix technology and run on Siemens' syngo MR XA70A software platform. The systems include Eco Power Mode for reduced energy and helium consumption. They have different gradient configurations suitable for all body regions, with stronger configurations supporting advanced cardiac imaging. Compared to the predicate device, new hardware includes a new magnet, gradient coil, RF system, local coils, patient tables, and computer systems. New software features include AutoMate Cardiac, Quick Protocols, BLADE with SMS acceleration for non-diffusion imaging, Deep Resolve Swift Brain, Fast GRE Reference Scan, Ghost reduction, Fleet Reference Scan, SMS Averaging, Select&GO extension, myExam Spine Autopilot, and New Startup-Timer. Modified features include improvements for Pulse Sequence Type SPACE, improved Gradient ECO Mode Settings, and Inline Image Filter switchable for users.
The provided 510(k) clearance letter and summary describe the acceptance criteria and supporting studies for the MAGNETOM Flow.Ace and MAGNETOM Flow.Plus devices, particularly focusing on their AI features: Deep Resolve Boost, Deep Resolve Sharp, and Deep Resolve Swift Brain.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
The document uses quality metrics like PSNR, SSIM, and NMSE as indicators of performance and implicitly as acceptance criteria. Visual inspection and clinical evaluations are also mentioned.
Feature | Quality Metrics (Acceptance Criteria) | Reported Performance (Summary) |
---|---|---|
Deep Resolve Boost | Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM) | Most metrics passed. |
Deep Resolve Sharp | PSNR, SSIM, Perceptual Loss, Visual Rating, Image sharpness evaluation by intensity profile comparisons | Verified and validated by in-house tests, including visual rating and evaluation of image sharpness. |
Deep Resolve Swift Brain | PSNR, SSIM, Normalized Mean Squared Error (NMSE), Visual Inspection | After successful passing of quality metrics tests, work-in-progress packages were delivered and evaluated in clinical settings with collaboration partners. Potential artifacts not well-captured by metrics were detected via visual inspection. |
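For reference, the submission names these metrics but gives neither formulas nor numerical pass/fail thresholds for most of them. Below is a minimal sketch of how such metrics are conventionally computed, assuming NumPy and scikit-image; the inputs and any thresholds are illustrative, not taken from the document.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_metrics(reference: np.ndarray, reconstruction: np.ndarray) -> dict:
    """Compare an AI reconstruction against its fully sampled reference slice."""
    data_range = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=data_range)
    ssim = structural_similarity(reference, reconstruction, data_range=data_range)
    # NMSE: squared error normalized by the energy of the reference image.
    nmse = float(np.sum((reference - reconstruction) ** 2) / np.sum(reference ** 2))
    return {"PSNR": psnr, "SSIM": ssim, "NMSE": nmse}

# Hypothetical usage over a validation set of slice pairs:
# results = [quality_metrics(ref, out) for ref, out in validation_pairs]
```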
2. Sample Sizes Used for the Test Set and Data Provenance
The document uses "Training and Validation data" and often refers to the datasets used for both. It is not explicitly stated what percentage or how many cases from these datasets were strictly reserved for a separate "test set" and what came from the "validation sets." However, given the separation in slice count, the "Validation" slices for Deep Resolve Swift Brain might be considered the test set.
- Deep Resolve Boost:
- TSE: >25,000 slices
- HASTE: >10,000 HASTE slices (refined)
- EPI Diffusion: >1,000,000 slices
- Data Provenance: Retrospectively created from acquired datasets. Data covered a broad range of body parts, contrasts, fat suppression techniques, orientations, and field strength.
- Deep Resolve Sharp:
- Data: >10,000 high resolution 2D images
- Data Provenance: Retrospectively created from acquired datasets. Data covered a broad range of body parts, contrasts, fat suppression techniques, orientations, and field strength.
- Deep Resolve Swift Brain:
- 1.5T Validation: 3,616 slices (This functions as a test set for 1.5T)
- 3T Validation: 6,048 slices (This functions as a test set for 3T)
- Data Provenance: Retrospectively created from acquired datasets.
The document does not explicitly state the country of origin for the data.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not explicitly state the number or qualifications of experts used to establish the ground truth for the test sets. For Deep Resolve Swift Brain, it mentions "evaluated in clinical settings with collaboration partners," implying clinical experts were involved in the evaluation, but details are not provided. For Boost and Sharp, the "acquired datasets...represent the ground truth," suggesting the raw imaging data itself, rather than expert annotations on that data, served as ground truth.
4. Adjudication Method for the Test Set
The document does not describe a formal adjudication method (e.g., 2+1, 3+1). For Deep Resolve Swift Brain, it mentions "visually inspected" and "evaluated in clinical settings with collaboration partners," suggesting a qualitative assessment, but details on consensus or adjudication are missing.
5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance
A formal MRMC comparative effectiveness study demonstrating human reader improvement with AI vs. without AI assistance is not described in the provided text. The studies focus on the AI's standalone performance in terms of image quality metrics and internal validation.
6. Whether a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done
Yes, standalone performance evaluation was performed for the AI features. The "Test Statistics and Test Results Summary" for Deep Resolve Boost, Deep Resolve Sharp, and Deep Resolve Swift Brain describes the evaluation of the algorithm's output using quantitative metrics (PSNR, SSIM, NMSE) and visual inspection against reference standards, which is characteristic of standalone performance evaluation.
7. The Type of Ground Truth Used
For Deep Resolve Boost, Deep Resolve Sharp, and Deep Resolve Swift Brain, the ground truth used was the acquired high-quality datasets themselves. The input data for training and validation was then retrospectively created from this ground truth by manipulating or augmenting it (e.g., undersampling k-space, adding noise, cropping, using only the center part of k-space). This means the original, higher-quality MR images or k-space data served as the reference for what the AI models should reconstruct or improve upon.
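The document does not describe the exact degradation pipeline, but a minimal sketch of the kind of retrospective pair generation it alludes to (undersampling k-space and adding noise) might look as follows; the acceleration factor, sampled-center width, and noise level are illustrative assumptions:

```python
import numpy as np

def degrade_slice(image: np.ndarray, accel: int = 4, center_lines: int = 32,
                  noise_std: float = 0.01, seed: int = 0) -> np.ndarray:
    """Simulate an accelerated, noisy acquisition from a fully sampled slice."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))
    ny = kspace.shape[1]
    mask = np.zeros(kspace.shape, dtype=bool)
    mask[:, ::accel] = True  # regular undersampling of phase-encode lines
    mask[:, ny // 2 - center_lines // 2: ny // 2 + center_lines // 2] = True  # keep center
    noise = noise_std * np.abs(kspace).max() * (
        rng.standard_normal(kspace.shape) + 1j * rng.standard_normal(kspace.shape))
    degraded = np.where(mask, kspace + noise, 0)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(degraded)))

# A training/validation pair would be (degrade_slice(fully_sampled), fully_sampled).
```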
8. The Sample Size for the Training Set
- Deep Resolve Boost:
- TSE: >25,000 slices
- HASTE: pre-trained on the TSE dataset and refined with >10,000 HASTE slices
- EPI Diffusion: >1,000,000 slices
- Deep Resolve Sharp: >10,000 high resolution 2D images
- Deep Resolve Swift Brain: 20,076 slices
9. How the Ground Truth for the Training Set Was Established
For Deep Resolve Boost, Deep Resolve Sharp, and Deep Resolve Swift Brain, the "acquired datasets (as described above) represent the ground truth for the training and validation." This implies that high-quality, fully acquired MRI data was considered the ground truth. The input data used during training (e.g., undersampled, noisy, or lower-resolution versions) was then derived or manipulated from this original ground truth. Essentially, the "ground truth" was the optimal, full-data acquisition before any degradation was simulated for the AI's input.
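As one concrete illustration of the "center part of k-space" manipulation mentioned above, a lower-resolution network input can be derived from the high-resolution ground truth by low-pass filtering in k-space. This is a sketch under stated assumptions; the kept fraction is illustrative, not a value from the submission:

```python
import numpy as np

def lowres_input(image: np.ndarray, keep_fraction: float = 0.5) -> np.ndarray:
    """Derive a lower-resolution input by keeping only the central k-space region."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    nx, ny = kspace.shape
    kx, ky = int(nx * keep_fraction) // 2, int(ny * keep_fraction) // 2
    lowpass = np.zeros_like(kspace)
    center = (slice(nx // 2 - kx, nx // 2 + kx), slice(ny // 2 - ky, ny // 2 + ky))
    lowpass[center] = kspace[center]  # discard the high spatial frequencies
    return np.abs(np.fft.ifft2(np.fft.ifftshift(lowpass)))
```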
(135 days)
This computed tomography system is intended to generate and process cross-sectional images of patients by computer reconstruction of x-ray transmission data.
The images delivered by the system can be used by a trained staff as an aid in diagnosis and treatment as well as for diagnostic and therapeutic interventions.
This CT system can be used for low dose lung cancer screening in high risk populations*.
* As defined by professional medical societies. Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
The subject device NAEOTOM Alpha with software version SOMARIS/10 syngo CT VB10 is a Computed Tomography X-ray system which features two continuously rotating tube-detector systems, denominated as A- and B-systems respectively (dual source CT scanner system). The detectors' function is based on photon-counting technology. The NAEOTOM Alpha with SOMARIS/10 syngo CT VB10 produces CT images in DICOM format, which can be used by trained staff for postprocessing applications commercially distributed by Siemens and other vendors as an aid in diagnosis and treatment as well as for diagnostic and therapeutic interventions. The computer system delivered with the CT scanner is able to run optional post-processing applications.
The platform software for the NAEOTOM Alpha is syngo CT VB10 (SOMARIS/10 syngo CT VB10). It is a command-based program used for patient management, data management, X-ray scan control, image reconstruction, and image archive/evaluation.
The software platform provides plugin software interfaces that allow for the use of specific commercially available post-processing software algorithms in an unmodified form from the cleared stand-alone post-processing version.
The provided text describes the Siemens NAEOTOM Alpha CT Scanner System with software version SOMARIS/10 syngo CT VB10. While it details extensive non-clinical testing and verification/validation activities, it does not include acceptance criteria for specific AI/software performance metrics (e.g., sensitivity, specificity, accuracy) nor does it describe a comparative clinical study (MRMC or standalone AI performance) with human readers or clinical outcomes. The submission focuses on demonstrating substantial equivalence to predicate devices through engineering verification and validation of new and modified features, rather than a clinical performance study of an AI-powered diagnostic aid.
Therefore, many of the requested details regarding AI performance acceptance criteria and a study proving device performance in a clinical AI context are not present in the provided document. The document primarily describes the general engineering and regulatory testing performed for a CT system and its software updates.
However, I can extract information related to the non-clinical testing performed to support the modifications, as well as the types of ground truth used where applicable.
Here's an attempt to answer the questions based only on the provided text, highlighting where information is absent:
1. A table of acceptance criteria and the reported device performance
The document defines "acceptance criteria" generally as part of the system validation tests (workflow and user manual tests, legal and regulatory tests) and system verification tests (system integration, functionality verification, image quality evaluation). For specific new/modified features, the acceptance criteria are generally qualitative or comparative relative to the predicate device, demonstrating comparability or improvement.
Feature/Non-clinical Supportive Testing | Acceptance Criteria (Implicit/Explicit) | Reported Device Performance and Conclusion
---|---|---
FAST 3D Camera / FAST Integrated Workflow | Accuracy of sub-features (FAST Isocentering, FAST Range, FAST Direction) should be comparable to the predicate device with syngo CT VA50. | "The FAST Isocentering accuracy of the subject device with syngo CT VB10 is comparable to the predicate device with syngo CT VA50, regardless of the camera mounting position." "For the FAST Range feature, the detection accuracy of all body region boundaries is comparable between the subject device with syngo CT VB10 and predicate device with syngo CT VA50." "The FAST Direction pose detection results are of comparable accuracy for subject and predicate device, regardless of the camera mounting position." "Overall, the SOMARIS/10 syngo CT VB10 delivers comparable accuracy to the SOMARIS/10 syngo CT VA50 predicate for the new FAST 3D Camera hardware."
Multi-Purpose Table (Vitus) | Provides sufficient freedom of movement for a mobile C-arm X-ray system to be used in clinical routine without any significant limitations for the myNeedle Laser or 3D Camera. | "Based on the test results it can be concluded that a CT scanner, equipped with a Multi-Purpose (Vitus) Patient Table, which is installed with enhanced distance (674 mm) to the CT gantry and offers the iCT mode functionality, provides sufficient freedom of movement for a mobile C-arm X-ray system to be used for clinical routine without any significant limitations for the myNeedle Laser or 3D Camera."
ZeeFree (Cardiac Stack Artefact Correction) | (1) Reduction of stack misalignment artifacts (discontinuities in vessel structures, anatomical steps, doubling of anatomy). (2) No new artifacts introduced. (3) Equivalent image quality in quantitative standard physics phantom-based measurements (noise, homogeneity, high-contrast resolution, slice thickness, CNR) compared to non-corrected standard reconstruction. (4) Equivalent image quality in quantitative and qualitative phantom-based measurements with respect to metal objects. (5) Algorithm can be successfully applied to phantom data from a motion phantom. | "If misalignment artefacts are identified in non-corrected standard ECG-gated reconstructed sequence or spiral images, the feature 'Cardiac Stack Artefact Correction' (SAC, marketing name: ZeeFree) enables optional stack artefact corrected images, which reduce the number of alignment artefacts." "The SAC reconstruction does not introduce new artefacts, which were previously not present in the non-corrected standard reconstruction." "The SAC reconstruction does realize equivalent image quality in quantitative standard physics phantom-based measurements (ACR, Gammex phantom) in terms of noise, homogeneity, high-contrast resolution, slice thickness and CNR compared to a non-corrected standard reconstruction." "The SAC reconstruction does realize equivalent image quality in quantitative and qualitative phantom-based measurements with respect to metal objects compared to a non-corrected standard reconstruction." "The SAC algorithm can be successfully applied to phantom data if derived from a suitable motion phantom demonstrating its correct technical function on the tested device."
myNeedle Guide (with myNeedle Detection) | (1) Accuracy of the automatic needle detection algorithm. (2) Reduction of necessary user interactions for navigating to a needle-oriented view. | "It has been shown that the algorithm was able to consistently detect needle-tips over a wide variety of scans in 90.76% of cases." "Further, the results of this bench test clearly show that the auto needle detection functionality reduces the number of interaction steps needed to generate a needle-aligned view in the CT Intervention SW. Zero user interactions are required and a needle-aligned view is displayed right away after a new scan, if auto needle detection is switched on in the SW configuration."
Quantum Spectral Imaging | (1) T3D reconstructions in Quantumpeak mode possible with the sharpest available kernels. (2) Quantumpeak scan mode allows monoenergetic images from 40 to 190 keV. (3) Monoenergetic reconstructions free of artifacts. (4) Measured CT values precisely match reference values. (5) Accuracy of monoenergetic reconstructions in iodine and calcium inserts comparable to or better than the secondary predicate. | "The results showed that: with T3D reconstructions from Quantumpeak scan modes, high resolution images with sharp kernel up to Br98 are obtained. The resolution is comparable to other high-resolution scan modes of the NAEOTOM Alpha." "Monoenergetic reconstructions from Quantumpeak scan modes are free of artifacts. Measured CT values precisely match the reference values." "The accuracy of monoenergetic reconstructions in iodine and calcium inserts at the NAEOTOM Alpha is comparable or better than on the secondary predicate device SOMATOM Force."
Quantum HD Cardiac | Substantial equivalence in image quality (UHR and standard-resolution spectral images) compared to the single-source spectral-capable 120x0.2 mm UHR scan mode. | "Based on the results it can be concluded that substantial equivalence in image quality is achieved by the images derived from the spectral capable cardiac acquisition mode 96x0.2mm for both, the high-resolution UHR and the standard resolution spectral image cases, compared to the single source spectral capable 120x0.2mm UHR scan mode."
HD FoV (High Definition Field of View) | HU accuracy in the extended field-of-view region. | "In the phantom study, an HU value accuracy of about +/- 40 HU was achieved with skin-line accuracy of about +/- 3 mm." "HD FoV enables the reconstruction of images while significantly improving the visualization of anatomy in the regions outside the scan field of view of 50 cm."
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size: Not specified in terms of patient counts for clinical validation, as this document focuses on non-clinical (phantom and bench) testing. For the "myNeedle Guide" feature, it states "a wide variety of scans" were used, and the success rate was "90.76% of cases" but doesn't quantify the number of cases. Phantom studies are mentioned for other features without specific numbers of scans/readings.
- Data Provenance: The testing described is non-clinical (phantom, bench, and system integration/verification testing). There is no mention of human patient data or its country of origin. The manufacturing site is Siemens Healthcare GmbH in Forchheim, Germany, implying the testing likely occurred there.
- Retrospective or Prospective: Not applicable as it's non-clinical testing.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified.
- Role of Experts: For the non-clinical tests, "ground truth" generally refers to the known physical properties of the phantoms or the expected performance based on engineering specifications. Human experts are mentioned as trained staff who would use the device, but not as part of a formal ground truth establishment process for the performance studies presented.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not applicable, as no multi-reader human-based test set or clinical study is described. The assessment of performance is based on measurements from phantoms and internal engineering verification.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance
- MRMC Study: No MRMC comparative effectiveness study is mentioned. The document focuses on demonstrating that new/modified features of the CT system are comparable to or improve upon the predicate device through non-clinical testing. The device is a CT system, not an AI-powered diagnostic assist that would typically be evaluated with MRMC. The "myNeedle Detection" feature is a software algorithm within the CT system to aid in procedures, but its evaluation is described as a bench test of its detection accuracy and reduction of user interaction steps, not an MRMC study.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Standalone Performance: The performance data provided is for the CT system and its integrated software features, including algorithms like ZeeFree and myNeedle Detection. The "algorithm only" performance is implicitly covered in the bench testing of these features (e.g., myNeedle Detection achieving 90.76% detection accuracy). However, this is not presented as a "standalone AI" product in the sense of a distinct AI diagnostic algorithm being submitted for clearance. It's an integrated feature of the CT system.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: For the non-clinical tests described, the ground truth is primarily based on:
- Known physical properties of phantoms: Used for evaluating image quality metrics (noise, homogeneity, resolution, CNR, HU accuracy, etc.) and quantitative measurements.
- Engineering specifications and expected functional behavior: For features like FAST 3D Camera accuracy and Multi-Purpose Table movement.
- Reference values: For monoenergetic reconstructions in Quantum Spectral Imaging.
- Manual verification/observation: For testing user interaction steps in myNeedle Guide.
8. The sample size for the training set
- Training Set Sample Size: Not specified. The document does not describe the development or training of AI models. It focuses on the verification and validation of specific software features within the CT system.
9. How the ground truth for the training set was established
- Training Set Ground Truth Establishment: Not specified, as no training set or AI model development is described in this document.
(232 days)
The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross-sectional images, and that displays the internal structure and/or function of the head or extremities. Other physical parameters derived from the images may also be produced. Additionally, the MAGNETOM system is intended to produce Sodium images for the head and Phosphorus spectroscopic images and/or spectra for whole body, excluding the head. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.
MAGNETOM Terra and MAGNETOM Terra.X with software syngo MR XA60A include new and modified hardware and software compared to the predicate device, MAGNETOM Terra with software syngo MR E12U. A high level summary of the new and modified hardware and software is provided below: Hardware: New Hardware (Combiner (pTx to sTx), MC-PALI, GSSU control unit, 8Tx32Rx Head coil), Modified Hardware (Main components such as: Upgrade of GPA, New Host computer hardware, New MaRS computer hardware, Upgrade the SEP, The new shim cabinet ASC5 replaces two ACS4 shim cabinets; Other components such as: RFPA, Use of a common MR component which provides basic functionality that is required for all MAGNETOM system types, The multi-nuclear (MNO) option has been modified, OPS module, Cover with UI update on PDD). Software: New Features and Applications (Static B1 shimming, TrueForm (1ch compatibility mode), Deep Resolve Boost, Deep Resolve Gain, Deep Resolve Sharp, Bias field correction (marketing name: Deep RxE), The new BEAT pulse sequence type, BLADE diffusion, The PETRA pulse sequence type, TSE DIXON, The Compressed Sensing (CS) functionality is now available for the SPACE pulse sequence type, The Compressed Sensing (CS) functionality is now available for the TFL pulse sequence type, IDEA, The Scientific Suite), Modified Features and Applications (EP2D DIFF and TSE with SliceAdjust, The Turbo Flash (TFL)), Modified Software / Platform (Stimulation monitoring, "dynamic research labeling"), Other Modifications and / or Minor Changes (Intended use, SAR Calculation and Weight limit reduction for 31P/1H TxRx Flex Loop Coil, X-upgrade for MAGNETOM Terra to MAGNETOM Terra.X, Provide secure MR scanner setup for DoD (Department of Defense) -Information Assurance compliance).
The provided text describes the acceptance criteria and supporting study for the AI features (Deep Resolve Boost, Deep Resolve Sharp, and Deep RxE) within the MAGNETOM Terra and MAGNETOM Terra.X devices.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
AI Feature | Acceptance Criteria | Reported Device Performance
---|---|---
Deep Resolve Boost | Characterization by several quality metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Visual inspection to ensure potential artifacts are detected. Successful passing of quality metrics tests. Work-in-progress packages delivered and evaluated in clinical settings. (Implicit: no misinterpretation, alteration, suppression, or introduction of anatomical information, and potential for faster image acquisition and significant time savings.) | The impact of the network has been characterized by several quality metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Additionally, images were inspected visually to ensure that potential artifacts are detected that are not well captured by the metrics listed above. After successful passing of the quality metrics tests, work-in-progress packages of the network were delivered and evaluated in clinical settings with cooperation partners. In a total of seven peer-reviewed publications, the investigations covered various body regions (prostate, abdomen, liver, knee, hip, ankle, shoulder, hand, and lumbar spine) on 1.5T and 3T systems. All publications concluded that the work-in-progress package and the reconstruction algorithm can be beneficially used for clinical routine imaging. No cases have been reported where the network led to a misinterpretation of the images or where anatomical information has been altered, suppressed, or introduced. Significant time savings are reported.
Deep Resolve Sharp | Characterization by several quality metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and perceptual loss. Verification and validation by in-house tests including visual rating and evaluation of image sharpness by intensity profile comparisons. (Implicit: increased edge sharpness.) | The impact of the network has been characterized by several quality metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and perceptual loss. In addition, the feature has been verified and validated by in-house tests. These tests include visual rating and an evaluation of image sharpness by intensity profile comparisons of reconstruction with and without Deep Resolve Sharp. Both tests show increased edge sharpness.
Deep RxE | (1) During training, the loss (difference to ground truth) is monitored, and the training step with the lowest test loss is taken as the final trained network. (2) Automated unit-tests are set up to test the consistency of the generated output to a previously defined reference output. (3) During verification, the performance of the network is tested on a phantom against the ground truth with a maximal allowed NRMSE of 11% (for the 2D network) and 8.7% (for the 3D network). (4) The trained final network was used in the clinical study. (Implicit: increases image homogeneity in a reproducible way on the receive profile, and images acquired with Deep RxE are rated better for image quality in the clinical study.) | (1) During training, the loss, as the difference to a ground truth, is monitored and the training step with the lowest test loss is taken as the final trained network. (2) Automated unit-tests are set up to test the consistency of the generated output to a previously defined reference output. (3) During verification, the performance of the network is tested on a phantom against the ground truth with a maximal allowed NRMSE of 11% (11% for the 2D network and 8.7% for the 3D network were achieved). (4) The trained final network was used in the clinical study. The tests show that Deep RxE increases image homogeneity in a reproducible way on the receive profile. Images acquired with Deep RxE (DL bias field correction) are rated better for image quality than the ones acquired without it in the clinical study that was conducted.
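The NRMSE thresholds above (11% for the 2D network, 8.7% for the 3D network) are the only numeric acceptance criteria stated. A minimal sketch of such a verification check; the range-based normalization is an assumption, as the submission does not define how NRMSE is normalized:

```python
import numpy as np

def nrmse(output: np.ndarray, ground_truth: np.ndarray) -> float:
    """Root-mean-square error normalized by the ground-truth intensity range."""
    rmse = np.sqrt(np.mean((output - ground_truth) ** 2))
    return float(rmse / (ground_truth.max() - ground_truth.min()))

def passes_verification(output: np.ndarray, ground_truth: np.ndarray, is_3d: bool) -> bool:
    # Thresholds from the submission: 11% for the 2D network, 8.7% for the 3D network.
    return nrmse(output, ground_truth) <= (0.087 if is_3d else 0.11)
```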
Note on Acceptance Criteria: The document directly states quantitative acceptance criteria only for Deep RxE (e.g., a maximal allowed NRMSE of 11% for the 2D network and 8.7% for the 3D network); for Deep Resolve Boost and Deep Resolve Sharp, acceptance is implied by passing the quality-metric tests and visual evaluations.
(77 days)
Syngo Carbon Clinicals is intended to provide advanced visualization tools to prepare and process the medical image for evaluation, manipulation and communication of clinical data that was acquired by the medical imaging modalities (for example, CT, MR, etc.)
The software package is designed to support technicians and physicians in qualitative and quantitative measurements and in the analysis of clinical data that was acquired by medical imaging modalities.
An interface shall enable the connection between the Syngo Carbon Clinicals software package and the interconnected software solution for viewing, manipulation, communication, and storage of medical images.
Syngo Carbon Clinicals is a software-only medical device which provides dedicated advanced imaging tools for diagnostic reading. These tools can be called up, using standard interfaces, by any native/syngo-based viewing application (hosting application) that is part of the SYNGO medical device portfolio. They help prepare and process the medical image for evaluation, manipulation, and communication of clinical data that was acquired by medical imaging modalities (e.g., MR, CT, etc.).
Deployment Scenario: Syngo Carbon Clinicals is a plug-in that can be added to any SYNGO-based hosting application (for example, Syngo Carbon Space, syngo.via, etc.). The hosting application (native/syngo Platform-based software) is not described within this 510(k) submission. The hosting device decides which tools are used from Syngo Carbon Clinicals; it does not need to host all tools, and a desired subset of the provided tools can be used. Tools can be enabled or disabled through licenses.
The provided text is a 510(k) summary for Syngo Carbon Clinicals (K232856). It focuses on demonstrating substantial equivalence to a predicate device through comparison of technological characteristics and non-clinical performance testing. The document does not describe acceptance criteria for specific device performance metrics or a study that definitively proves the device meets those criteria through clinical trials or quantitative bench testing with specific reported performance values.
Instead, it relies heavily on evaluating the fit-for-use of algorithms (Lesion Quantification and Organ Segmentation) that were previously studied and cleared as part of predicate or reference devices, and ensuring their integration into the new device without modification to the core algorithms. The non-clinical performance testing for Syngo Carbon Clinicals focuses on verification and validation of changes/integrations, and conformance to relevant standards.
Therefore, many of the requested details about acceptance criteria and reported device performance cannot be extracted directly from this document. However, I can provide information based on what is available.
Acceptance Criteria and Study for Syngo Carbon Clinicals
Based on the provided 510(k) summary, formal acceptance criteria with specific reported performance metrics for the Syngo Carbon Clinicals device itself are not explicitly detailed in a comparative table against a clinical study's results. The submission primarily relies on the equivalency of its components to previously cleared devices and non-clinical verification and validation.
The "study" proving the device meets acceptance criteria is fundamentally a non-clinical performance testing, verification, and validation process, along with an evaluation of previously cleared algorithms from predicate/reference devices for "fit for use" in the subject device.
Here's a breakdown of the requested information based on the document:
1. Table of Acceptance Criteria and Reported Device Performance
As mentioned, a direct table of specific numerical acceptance criteria and a corresponding reported device performance from a clinical study is not present. The document describes acceptance in terms of:
Feature/Algorithm | Acceptance Criteria (Implicit) | Reported Device Performance
---|---|---
Lesion Quantification Algorithm | "Fit for use" in the subject device, with design mitigations for drawbacks/limitations identified in previous studies of the predicate device. | "The results of phantom and reader studies conducted on the Lesion Quantification Algorithm, in the predicate device, were evaluated for fit for use in the subject device and it was concluded that the Algorithm can be integrated in the subject device with few design mitigations to overcome the drawbacks/limitations specified in these studies. These design mitigations were validated by non-Clinical performance testing and were found acceptable." (No new specific performance metrics are reported for Syngo Carbon Clinicals, but rather acceptance of the mitigations.)
Organ Segmentation Algorithm | "Fit for use" in the subject device without any modifications, based on previous studies of the reference device. | "The results of phantom and reader studies conducted on the Organ Segmentation Algorithm, in the reference device, were evaluated for fit for use in the subject device. And it was concluded that the Algorithm can be integrated in the subject device without any modifications." (No new specific performance metrics are reported for Syngo Carbon Clinicals.)
Overall Device Functionality | Conformance to specifications, safety, and effectiveness comparable to the predicate device. | "Results of all conducted testing were found acceptable in supporting the claim of substantial equivalence." (General statement, no specific performance metrics.) Consistent with "Moderate Level of Concern" software.
Software Verification & Validation | All software specifications met the acceptance criteria. | "The testing results support that all the software specifications have met the acceptance criteria." (General statement.)
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not explicitly stated for specific test sets in this document for Syngo Carbon Clinicals. The evaluations of the Lesion Quantification and Organ Segmentation algorithms refer to "phantom and reader studies" from their respective predicate/reference devices, but details on the sample sizes of those original studies are not provided here.
- Data Provenance: Not specified. The original "phantom and reader studies" for the algorithms were likely internal to the manufacturers or collaborators, but this document does not detail their origin (e.g., country, specific institutions). The text indicates these were retrospective studies (referring to prior evaluations).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts
- Number of Experts: Not specified. The document mentions "reader studies" were conducted for the predicate/reference devices' algorithms, implying involvement of human readers/experts, but the number is not stated.
- Qualifications of Experts: Not specified. It can be inferred that these would be "trained medical professionals" as per the intended user for the device, but specific qualifications (e.g., radiologist with X years of experience) are not provided.
4. Adjudication Method for the Test Set
- Adjudication Method: Not specified for the historical "reader studies" referenced. This document does not detail the methodology for establishing ground truth or resolving discrepancies among readers if multiple readers were involved.
5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance
- MRMC Comparative Effectiveness Study: The document itself states, "No clinical studies were carried out for the product, all performance testing was conducted in a non-clinical fashion as part of verification and validation activities of the medical device." Therefore, no MRMC comparative effectiveness study for human readers with and without AI assistance for Syngo Carbon Clinicals was performed or reported in this submission. The device is a set of advanced visualization tools, not an AI-assisted diagnostic aid that directly impacts reader performance in a comparative study mentioned here.
6. Whether a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done
- Standalone Performance: The core algorithms (Lesion Quantification and Organ Segmentation) were evaluated in "phantom and reader studies" as part of their previous clearances (predicate/reference devices). While specific standalone numerical performance metrics for these algorithms (e.g., sensitivity, specificity, accuracy) are not reported in this document, the mention of "phantom" studies suggests a standalone evaluation component. The current submission, however, evaluates these previously cleared algorithms for "fit for use" within the new Syngo Carbon Clinicals device, implying their standalone performance was considered acceptable from their original clearances.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: Not explicitly detailed. The referenced "phantom and reader studies" imply that for phantoms, the ground truth would be known (e.g., physical measurements), and for reader studies, it would likely involve expert consensus or established clinical benchmarks. However, the exact method for establishing ground truth in those original studies is not provided here.
8. The Sample Size for the Training Set
- Sample Size for Training Set: Not specified in this 510(k) summary. The document mentions that the deep learning algorithm for organ segmentation was "cleared as part of the reference device syngo.via RT Image suite (K220783)." This implies that any training data for this algorithm would have been part of the K220783 submission, not detailed here for Syngo Carbon Clinicals.
9. How the Ground Truth for the Training Set was Established
- Ground Truth for Training Set: Not specified in this 510(k) summary. As with the training set size, this information would have been part of the original K220783 submission for the organ segmentation algorithm and is not detailed in this document.
(86 days)
syngo.CT Lung CAD device is a computer-aided detection (CAD) tool designed to assist radiologists in the detection of solid and subsolid pulmonary nodules during review of multi-detector computed tomography (MDCT) from multivendor examinations of the chest. The software is an adjunctive tool to alert the radiologist to regions of interest (ROI) that may be otherwise overlooked.
The syngo.CT Lung CAD device may be used as a concurrent first reader followed by a full review of the case by the radiologist or as second reader after the radiologist has completed his/her initial read.
The syngo.CT Lung CAD device may also be used in "solid-only" mode, where potential (or suspected) sub-solid and/or fully calcified CAD findings are filtered out.
The software device is an algorithm which does not have its own user interface component for displaying CAD marks. The Hosting Application incorporating syngo.CT Lung CAD is responsible for implementing a user interface.
Siemens Healthcare GmbH intends to market the syngo.CT Lung CAD which is a medical device that is designed to perform CAD processing in thoracic CT examinations for the detection of solid pulmonary nodules (between 3.0 mm and 30.0mm) and subsolid nodules (between 5.0 mm and 30.0mm) in average diameter. The device processes images acquired with multi-detector CT scanners with 16 or more detector rows recommended.
The syngo.CT Lung CAD device supports the full range of nodule locations (central, peripheral) and contours (round, irregular).
The syngo.CT Lung CAD sends a list of nodule candidate locations to a visualization application, such as syngo MM Oncology, or a visualization rendering component, which generates output image series with the CAD marks superimposed on the input thoracic CT images to enable the radiologist's review. syngo MM Oncology (FDA clearance K211459 and subsequent versions) is deployed on the syngo.via platform (FDA clearance K191040 and subsequent versions), which provides a common framework for various other applications implementing specific clinical workflows (which are not part of this clearance) to display the CAD marks. The syngo.CT Lung CAD device may be used either as a concurrent first reader, followed by a review of the case, or as a second reader only after the initial read is completed.
The provided text describes the Siemens syngo.CT Lung CAD (Version VD30) and its substantial equivalence to its predicate device (syngo.CT Lung CAD Version VD20). The primary change in VD30 is the introduction of a "solid-only" mode. The acceptance criteria and study details focus on demonstrating that VD30 in "solid-only" mode is not inferior to VD20 in standard mode, and that VD30 in standard mode is not inferior to VD20 in standard mode. Because the document focuses on demonstrating non-inferiority to a predicate device, numerical acceptance targets (e.g., minimum sensitivity thresholds) are not stated; acceptance is instead based on statistical non-inferiority.
Here's a breakdown of the requested information based on the provided text:
1. A table of acceptance criteria and the reported device performance
Acceptance Criteria (Implied for Non-inferiority) | Reported Device Performance (Summary) |
---|---|
For VD30 (solid-only mode) vs. VD20 (standard mode): | |
- Sensitivity of VD30 in solid-only mode is not inferior to VD20 in standard mode. | The standalone accuracy has shown that the sensitivity of VD30 in solid-only mode is not inferior to VD20 in standard mode. |
- Mean number of false positives (FPs) per subject is significantly lower with VD30 in solid-only mode. | The mean number of false positives per subject is significantly lower with VD30 in solid-only mode. |
- The 2 CAD systems overlap in True Positives (TPs) and FPs. | (Implied as part of showing non-inferiority and lower FPs). |
For VD30 (standard mode) vs. VD20 (standard mode): | |
- Sensitivity of VD30 in standard mode is not inferior to VD20 in standard mode. | The sensitivity of VD30 in standard mode is not inferior to VD20 in standard mode. |
- Mean number of FPs per subject of VD30 in standard mode is not inferior to VD20 in standard mode. | The mean number of FPs per subject of VD30 in standard mode is not inferior to VD20 in standard mode. |
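A minimal sketch of the two standalone endpoints in the table (per-nodule sensitivity and mean false positives per subject); the per-case data structure and the candidate-to-truth matching rule are assumptions, since the submission does not describe them:

```python
def standalone_metrics(cases: list[dict]) -> tuple[float, float]:
    """cases: one dict per subject with counts of matched ('tp'), missed ('fn'),
    and falsely marked ('fp') nodules after some candidate-to-truth matching rule."""
    tp = sum(c["tp"] for c in cases)
    fn = sum(c["fn"] for c in cases)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    mean_fps_per_subject = sum(c["fp"] for c in cases) / len(cases)
    return sensitivity, mean_fps_per_subject
```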
2. Sample size used for the test set and the data provenance
- Sample Size: 712 CT thoracic cases.
- Data Provenance: Retrospectively collected data from 3 sources:
- The UCLA study (232 cases)
- The original PMA study (145 cases)
- Additional cases (335 cases)
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document differentiates ground truth establishment based on the data source:
- UCLA data: Reference standard (ground truth) was determined as part of the reader study for the predicate device (K203258). The number and qualifications of experts are not explicitly stated for this subset in the provided text for VD30, but it refers to the predicate clearance.
- PMA study cases: 18 readers were used. Qualifications are not explicitly stated, but 9 of the 18 readers were needed for declaring a true nodule.
- Additional cases: 7 readers were used. Qualifications are not explicitly stated, but 4 of the 7 readers were needed for declaring a true nodule.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
The adjudication method varied based on the data source:
- PMA study cases: 9 out of 18 readers were needed for declaring a true nodule. This suggests a majority consensus from a large panel.
- Additional cases: 4 out of 7 readers were needed for declaring a true nodule. This also suggests a majority consensus.
- UCLA data: "Reference standard for the UCLA data was determined as part of the reader study (K203258)." Specific adjudication details for this subset are not provided in this document but are referenced to the predicate device's clearance.
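The majority rules described above (9 of 18 readers for the PMA cases, 4 of 7 for the additional cases) amount to a simple threshold on reader votes; a minimal sketch:

```python
def is_true_nodule(reader_marks: list[bool], required: int) -> bool:
    """A candidate counts as a true nodule when enough readers marked it."""
    return sum(reader_marks) >= required

# PMA cases: is_true_nodule(marks_from_18_readers, required=9)
# Additional cases: is_true_nodule(marks_from_7_readers, required=4)
```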
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance
A MRMC comparative effectiveness study involving human readers with and without AI assistance is not explicitly described in this document. The statistical analysis performed was a standalone performance analysis to demonstrate substantial equivalence between two CAD versions (VD30 vs VD20), focusing on the algorithm's performance metrics (sensitivity, FPs).
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Yes, a standalone performance analysis was done. The document states: "The standalone performance analysis was designed to demonstrate the substantial equivalence between syngo.CT Lung CAD VD30A (VD30) and the predicate device syngo.CT Lung CAD VD20."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth was established through expert consensus/reader review.
- For PMA cases: 9 out of 18 readers' consensus.
- For additional cases: 4 out of 7 readers' consensus.
- For UCLA data: Reference standard from a reader study.
8. The sample size for the training set
The document does not explicitly state the sample size for the training set. It mentions that the algorithm is based on Convolutional Networks (CNN) and that the lung segmentation algorithm for VD30 in particular is "trained on lung CT data including comorbidities for robustness," but the specific number of cases for this training set is not provided.
9. How the ground truth for the training set was established
The document does not explicitly describe how the ground truth for the training set was established. It only states that the lung segmentation algorithm was "trained on lung CT data" and that the overall algorithm uses CNNs, implying supervised learning, which would require ground truth annotations. However, the method of obtaining these annotations is not detailed.
(21 days)
Syngo Carbon Space is a software intended to display medical data and to support the review and analysis of medical images by trained medical professionals.
Syngo Carbon Space "Diagnostic Workspace" is indicated for display, rendering, post-processing of medical data (mostly medical images) within healthcare institutions, for example, in the field of Radiology, Nuclear Medicine and Cardiology.
Syngo Carbon Space "Physician Access" is indicated for display and rendering of medical data within healthcare institutions.
Syngo Carbon Space is a software only medical device which is intended to be installed on recommended common IT Hardware. The hardware is not seen as part of the medical device. Syngo Carbon Space is intended to support reviews and analysis of medical images by trained medical practitioners. The software is used in Radiology for reading images and throughout the healthcare institutions for image & result distribution.
Syngo Carbon Space is a medical device, provided in two variants/options.
- Diagnostic Workspace (Fat/Thick Client)
- Physician Access (Thin/Web Client)
In any scenario, both the options can be installed/run on the same machine and be used simultaneously.
Syngo Carbon Space Diagnostic Workspace provides a reading workspace for Radiology that supports display of medical image data & documents and connects intelligent work tools (diagnostic and non-diagnostic software elements) to enable easy access to the data needed, easy access to external tools and creation of actionable results.
Syngo Carbon Space Physician Access provides a zero-footprint web application for enterprise-wide viewing of DICOM, non-DICOM, multimedia data and clinical documents to facilitate image and result distribution in the healthcare institution.
The provided text is a 510(k) summary for the Syngo Carbon Space (VA30A) device, a medical image management and processing system. The core of this submission is to demonstrate substantial equivalence to a predicate device (Syngo Carbon Space VA20A).
However, the provided text does not contain the detailed clinical or non-clinical performance test results with acceptance criteria and reported performance values that would typically be presented in a table. It states that "No clinical studies were carried out for the product, all performance testing was conducted in a non-clinical fashion as part of verification and validation activities of the medical device." and "There are no changes to the algorithm and its performance that requires a new bench testing for the subject device. The results/summary from the predicate device is still applicable for the subject device."
Therefore, I cannot populate the table of acceptance criteria and reported device performance from the provided text directly. The document focuses on demonstrating that the new version of the software (VA30A) maintains the same safety and effectiveness as the previous version (VA20A) by highlighting identical intended use, indications for use, contraindications, and core functionalities. It also details minor enhancements to existing features (like measurement tools and patient jacket functionality) and updates to supported operating systems and browsers, which were validated through non-clinical performance testing.
Here's a breakdown of the requested information based on the provided text, with explicit notes on what is NOT available:
1. A table of acceptance criteria and the reported device performance
This information is NOT explicitly detailed in the provided document. The document states that "all performance testing was conducted in a non-clinical fashion as part of verification and validation activities" and "The testing results support that all the software specifications have met the acceptance criteria." However, it does not enumerate specific acceptance criteria or the quantitative results of these non-clinical tests for the VA30A or its predicate beyond a high-level statement of conformance.
The document emphasizes that there are "no changes to the algorithm and its performance that requires a new bench testing for the subject device. The results/summary from the predicate device is still applicable for the subject device." This implies that the performance demonstrated by the predicate device (VA20A) is considered valid for the subject device (VA30A), but the specific performance metrics and their acceptance criteria are not provided in this summary.
2. Sample sized used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: NOT specified. The document mentions "non-clinical performance testing" and "verification and validation activities" but does not provide details on the sample size of data or tests used for these validations.
- Data Provenance: NOT specified. Given that no clinical studies were performed, the data would have been synthetic or from internal testing environments. The origin (country) or nature (retrospective/prospective) of any test data is not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- NOT applicable/specified. Since no clinical studies were performed and the testing was non-clinical and focused on software verification and validation, there is no mention of experts establishing ground truth for a clinical test set. The validation would have been against pre-defined software requirements or simulated scenarios.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- NOT applicable/specified. As no clinical studies or reader studies are reported, there's no mention of an adjudication method for ground truth.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance
- No, an MRMC comparative effectiveness study was NOT done. The document explicitly states: "No clinical studies were carried out for the product". The device is a "Medical Image Management and Processing System" and doesn't appear to include AI-assisted diagnostic capabilities (the text mentions "No automated diagnostic interpretation capabilities like CAD are included. All image data are to be interpreted by trained personnel.").
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Yes, in spirit, standalone non-clinical testing was performed for software verification. While not described as an "algorithm only" performance study in the context of diagnostic accuracy, the entire submission is based on "non-clinical performance testing" and "verification and validation activities" to ensure the software functions as intended and meets specifications. The document states: "Performance tests were conducted to test the functionality of the device Syngo Carbon Space. These tests have been performed to assess the functionality of the subject device."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- NOT applicable/specified in the context of clinical ground truth. For non-clinical software testing, ground truth would be established by specified software requirements, functional correctness, performance benchmarks (e.g., speed, data integrity, display accuracy against known inputs), and adherence to standards (DICOM, HL7). The document doesn't detail how specific "ground truths" were established for these technical tests.
8. The sample size for the training set
- NOT applicable/specified. As this is not a submission for an AI/ML algorithm that requires a training set for model development, there is no mention of a training set size. The device is a software system for image management and processing.
9. How the ground truth for the training set was established
- NOT applicable/specified. (See point 8)
(70 days)
MAMMOVISTA B.smart is a dedicated softcopy review environment for both screening and diagnostic mammography as well as digital breast tomosynthesis. Its user interface and workflow have been optimized to support experienced mammography and tomosynthesis reviewers in both screening and diagnostic reading. Efficiency and reading quality are supported by various specialized features. MAMMOVISTA B.smart provides visualization and image enhancement tools to aid a qualified radiologist in the review of digital mammography and digital breast tomosynthesis datasets, as well as other modalities of breast images.
MAMMOVISTA B.smart is an optional software application for the Siemens Healthineers syngo.via platform (K191040). MAMMOVISTA B.smart is an image viewing and processing software environment dedicated to breast image display. It is designed to provide the performance required for the high data volume of digital tomosynthesis and the display of multi-modality breast images, such as those from MRI and ultrasound. Individual workflows can be adapted to either screening or diagnostic purposes.
MAMMOVISTA B.smart runs on a PC and can be used for Mammography image review together with monitors cleared for Mammography diagnostics. The software solution provides for the display of DICOM compatible information, such as breast density and CAD (Computer Aided Diagnostics) markers.
The provided text describes MAMMOVISTA B.smart (VB70), a software device for mammography image review. However, it does not explicitly state acceptance criteria or a dedicated study proving performance against such criteria. The submission is a 510(k) premarket notification for substantial equivalence, comparing the new VB70 version to a predicate device, MAMMOVISTA B.smart VB60 (K212621).
Here's an analysis based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance:
The document does not explicitly present a table of quantitative acceptance criteria or device performance metrics for the VB70 version beyond a feature-by-feature comparison to the predicate device. The performance is described in terms of functional equivalence and safety.
Feature/Criterion | Acceptance Criteria (Implied) | Reported Device Performance (VB70) |
---|---|---|
Functional Equivalence | Functions identically to predicate device (VB60). | "MAMMOVISTA B.smart VB70 has the same indications for use as the predicate device. ...The new software design was completed in accordance with Quality Management System Design Controls comparable to the processes available for the predicate device. The scope of internationally recognized standards compliance is the same as it was for the predicate device." "Verification and validation testing demonstrate that the MAMMOVISTA B.smart performs as intended." |
Safety | No new safety risks compared to predicate device. | "It is Siemens' opinion that the MAMMOVISTA B.smart does not introduce any new potential safety risks and is substantially equivalent to the MAMMOVISTA B.smart VB60." Risk analysis completed and controls implemented. |
Compliance with Standards | Conforms to relevant software and medical device standards. | Complies with IEC 62366-1:2015 Ed. 1.0, IEC 62304:2015 Ed. 1.1, and NEMA PS 3.1-3.20 (2016). |
DICOM Compatibility | Compatible with DICOM 3.0 and various modalities. | Same as predicate, supports MG, MG Tomo, MR, CR, CT, DR, NM, US, SC, PET. |
Display of CAD Markers | Ability to display third-party CAD markers. | Yes, same as predicate. |
Display/Processing of DBT | Ability to display and process Digital Breast Tomosynthesis images. | Yes, same as predicate. |
Display of Breast Density | Ability to display breast density values. | Yes, same as predicate. |
Configuration/Settings | Workflow, layout, image viewing, and tool settings function as intended. Minimal impact on safety/effectiveness for new settings. | Includes automatic study grouping, diagnostic display responsibility, client compatibility check, image rendering performance, layout settings, ReportFlow settings, custom image text settings, image navigation settings, image viewing preferences, image tool settings, workflow settings, screening case detection, double blind reading. "The new settings do not impact safety and effectiveness." |
MR Support | MR Layouts and functionality (e.g., color overlay, time curve analyzer) function as intended. Minimal impact on safety/effectiveness for new MR features. | New MR layouts (MR.Kaiser, MR.MPR, MR.DWI, MR.FollowUp), color overlay, time curve analyzer. "The new MR features do not impact safety and effectiveness." |
2. Sample Size Used for the Test Set and Data Provenance:
The document states: "Non-clinical tests (integration and functional) were conducted on the MAMMOVISTA B.smart during product development." It further notes, "Siemens did not conduct any clinical tests for the subject device." Therefore, the "test set" in this context refers to software testing and verification/validation, not a clinical data set for performance evaluation in a medical context.
- Sample Size for Test Set: Not specified, as it refers to internal software testing, not a clinical population.
- Data Provenance: Not applicable for a software-only 510(k) submission based on substantial equivalence and non-clinical testing. No patient data or clinical images are mentioned as being part of a "test set" for performance evaluation in the clinical sense.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
Not applicable. This was a non-clinical software verification and validation study, not a clinical performance study requiring expert ground truth for medical diagnoses.
4. Adjudication Method for the Test Set:
Not applicable. This was a non-clinical software verification and validation study, not a clinical performance study involving human adjudication of medical findings.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done; if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
No. The document explicitly states: "Siemens did not conduct any clinical tests for the subject device." The device is a "softcopy review environment" that provides "visualization and image enhancement tools to aid a qualified radiologist," meaning it's a viewing workstation, not an AI or CAD device that provides interpretations or assists directly with diagnostic accuracy in a quantifiable way like an AI algorithm.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done:
No. The device is a viewing and processing software. It is not an algorithm designed to provide standalone diagnostic interpretations. Its purpose is to "aid a qualified radiologist in the review" of images.
7. The Type of Ground Truth Used:
Not applicable. As a software viewing platform, the concept of "ground truth" (pathology, expert consensus, outcomes data) for its own performance is not directly relevant. Its performance is related to its ability to display images correctly, adhere to DICOM standards, and provide tools as specified, which are verified through non-clinical software testing.
8. The Sample Size for the Training Set:
Not applicable. This device is a software viewing platform, not an AI/Machine Learning algorithm that requires a "training set" of data.
9. How the Ground Truth for the Training Set was Established:
Not applicable, as there is no training set mentioned or implied for this type of device.
(144 days)
syngo.via View&GO is indicated for image rendering and post-processing of DICOM images to support the interpretation in the field of radiology, nuclear medicine and cardiology.
syngo.via View&GO is a software-only medical device, delivered by download and installed on common IT hardware. This hardware has to fulfil the defined requirements. Any hardware platform that complies with the specified minimum hardware and software requirements, and that passes installation verification and validation, can be supported. The hardware itself is not seen as part of the medical device syngo.via View&GO and is therefore not in the scope of this 510(k) submission.
syngo.via View&GO provides tools and features covering the radiological tasks of preparing for reading, reading images, and supporting reporting. syngo.via View&GO supports DICOM-formatted images and objects.
syngo.via View&GO is a standalone viewing and reading workplace, capable of rendering data from the connected modalities for post-processing activities. It provides the user interface for interactive image viewing and processing, with limited short-term storage that can be interfaced with any long-term storage (e.g., a PACS) via DICOM. syngo.via View&GO is based on Microsoft Windows operating systems.
syngo.via View&GO supports various monitor setups and can be adapted to a range of image types by connecting different monitor types.
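As a concrete illustration of the DICOM interfacing to long-term storage described above, here is a minimal sketch, assuming pynetdicom, of verifying connectivity to a PACS node with a C-ECHO. The host, port, and AE titles are placeholders, not values from the submission.

```python
# Verify basic DICOM connectivity to a long-term storage node via C-ECHO.
from pynetdicom import AE

ae = AE(ae_title="VIEWER")
ae.add_requested_context("1.2.840.10008.1.1")  # Verification SOP Class UID
assoc = ae.associate("pacs.example.org", 104, ae_title="PACS")  # placeholders
if assoc.is_established:
    status = assoc.send_c_echo()
    print(f"C-ECHO status: 0x{status.Status:04X}")  # 0x0000 indicates success
    assoc.release()
else:
    print("could not associate with the PACS node")
```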
The provided text is a 510(k) Summary for the Siemens Healthcare GmbH device "syngo.via View&GO" (Version VA30A). This document focuses on demonstrating substantial equivalence to a predicate device (syngo.via View&GO, Version VA20A) rather than presenting a detailed study of the device's performance against specific acceptance criteria for a novel algorithm.
The document states that the software is a Medical Image Management and Processing System, and its purpose is for "image rendering and post-processing of DICOM images to support the interpretation in the field of radiology, nuclear medicine and cardiology." It specifically states, "No automated diagnostic interpretation capabilities like CAD are included. All image data are to be interpreted by trained personnel."
Therefore, the provided text does not contain the information requested regarding acceptance criteria and a study proving an algorithm meets those criteria for diagnostic performance. It does not describe an AI/ML algorithm or its associated performance metrics.
However, based on the provided text, I can infer some aspects and highlight what information is missing if this were an AI-driven diagnostic device.
Here's an analysis based on the assumption that if this were an AI-based device, these fields would typically be addressed:
Summary of Device Performance (Based on provided text's limited scope for a general medical image processing system):
Since "syngo.via View&GO" is a medical image management and processing system without automated diagnostic interpretation capabilities, the acceptance criteria and performance data would revolve around its functionality, usability, and safety in handling and presenting medical images. The provided text primarily establishes substantial equivalence based on the lack of significant changes in core functionality and the adherence to relevant standards for medical software and imaging.
1. Table of acceptance criteria and the reported device performance:
The document doesn't provide a table of performance metrics for an AI algorithm. Instead, it describes "Non-clinical Performance Testing" focused on:
- Conformance to standards (DICOM, JPEG, ISO 14971, IEC 62304, IEC 82304-1, IEC 62366-1, IEEE Std 3333.2.1-2015).
- Software verification and validation (demonstrating continued conformance with special controls for medical devices containing software).
- Risk analysis and mitigation.
- Cybersecurity requirements.
- Functionality of the device (as outlined in the comparison table between subject and predicate device).
Reported Performance/Findings (General):
- "The testing results support that all the software specifications have met the acceptance criteria."
- "Testing for verification and validation for the device was found acceptable to support the claims of substantial equivalence."
- "Results of all conducted testing were found acceptable in supporting the claim of substantial equivalence."
- The device "does not introduce any new significant potential safety risks and is substantially equivalent to and performs as well as the predicate device."
Example of what a table might look like if this were an AI algorithm, along with why it's not present:
Acceptance Criterion (Hypothetical for AI) | Reported Device Performance (Hypothetical for AI) |
---|---|
Primary Endpoint: Sensitivity for detecting X > Y% | Not applicable - device has no diagnostic AI. |
Secondary Endpoint: Specificity for detecting X > Z% | Not applicable - device has no diagnostic AI. |
Image Rendering Accuracy (e.g., visual fidelity compared to ground truth) | "All the software specifications have met the acceptance criteria." (general statement) |
DICOM Conformance | Conforms to NEMA PS 3.1-3.20 (2016a) |
User Interface Usability (e.g., according to human factors testing) | Changes are "limited to the common look and feel based on Siemens Healthineers User Interface Style Guide." "The changes... doesn't impact the safety and effectiveness... of the subject device." |
Feature Functionality (e.g., MPR, MIP/MinIP, VRT, measurements) | "Algorithms underwent bug-fixing and minor improvements. No re-training or change in algorithm models was performed." "The changes... doesn't impact the safety and effectiveness... of the subject device." (A minimal sketch of one such classical algorithm follows this table.) |
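As a concrete illustration of the classical (non-learned) rendering techniques named in the row above, here is a minimal NumPy sketch of a maximum intensity projection. It is illustrative only, using a synthetic volume, and is not the vendor's implementation.

```python
# Maximum (and minimum) intensity projection over a synthetic 3-D volume.
import numpy as np

rng = np.random.default_rng(0)
volume = rng.integers(0, 4096, size=(64, 256, 256)).astype(np.int16)

mip = volume.max(axis=0)     # brightest voxel along the slice axis
minip = volume.min(axis=0)   # minimum intensity projection (MinIP)

print(mip.shape, minip.shape)  # (256, 256) each
```

Because such projections are fixed mathematical operations, they require no training data, which is consistent with the "no re-training" statement quoted in the table.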
2. Sample size used for the test set and the data provenance:
- Not explicitly stated for diagnostic performance, as the device does not have automated diagnostic capabilities.
- The software verification and validation activities would involve testing with various DICOM images to ensure proper rendering and processing. The exact number of images or datasets used for these software tests is not detailed.
- Data Provenance: Not specified, as it's not a clinical performance study.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable / Not stated. Ground truth for diagnostic accuracy is not established for this device, as it does not perform automated diagnosis. The ground truth for software functionality would be the expected behavior of the software according to its specifications.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not applicable. No clinical adjudication method is described, as this is neither a clinical study nor an AI diagnostic device.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No, an MRMC study was NOT done/described. The device explicitly states it has "No automated diagnostic interpretation capabilities like CAD are included. All image data are to be interpreted by trained personnel." Therefore, it does not offer AI assistance for diagnosis.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Not applicable. The device is a "Medical Image Management and Processing System" that provides tools for human interpretation; it is not a standalone diagnostic algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not applicable for diagnostic purposes. For software functionality, the ground truth is the defined behavior as per the software specifications and design.
8. The sample size for the training set:
- Not applicable/Not stated. The document explicitly states in the "Imaging algorithms" section that "No re-training or change in algorithm models was performed." This implies the algorithms are traditional image-processing algorithms rather than machine-learning models that require training data; such classical algorithms are defined by their mathematical formulation, not learned from a dataset of clinical cases.
9. How the ground truth for the training set was established:
- Not applicable. As indicated above, there is no mention of "training" in the context of machine learning. The algorithms are described as undergoing "bug-fixing and minor improvements" but no "re-training or change in algorithm models."
(59 days)
syngo Dynamics is a multimodality, vendor agnostic Cardiology image and information system intended for medical image management and processing that provides capabilities relating to the review and digital processing of medical images.
syngo Dynamics supports clinicians by providing post image processing functions for image manipulation, and/or quantification that are intended for use in the interpretation and analysis of medical images for disease detection, diagnosis, and/or patient management within the healthcare institution's network.
syngo Dynamics is not intended to be used for display or diagnosis of digital mammography images in the U.S.
syngo Dynamics is a software only medical device which is used with common IT hardware. Recommended configurations are defined for the hardware required to run the device, and hardware is not considered as part of the medical device.
syngo Dynamics is intended to be used by trained healthcare professionals in a professional healthcare facility to review, edit, and manipulate image data, as well as to generate quantitative data, qualitative data, and diagnostic reports.
syngo Dynamics is a digital image display and reporting system with flexible deployment - it can function as a standalone medical device that includes a DICOM Server or as an integrated module within an Electronic Health Record (EHR) System with a DICOM Archive that receives images from digital image acquisition devices such as ultrasound and x-ray angiography machines. There are three deployments: Standalone, EHR/EHS Integrated, and Multi-Modality Cardiovascular (MMCV). The MMCV deployment functions as a standalone medical device with the capability to natively support 2D and 3D CT and MR image types.
syngo Dynamics is based on a client-server architecture. The syngo Dynamics server processes the data from the connected imaging modalities, and stores data and images to a DICOM server and routes them for permanent storage, printing, and review. The client provides the user interface for interactive image viewing, reporting, and processing; and can be installed on network connected workstations.
syngo Dynamics offers multiple access strategies: a Workplace that provides full functionality for reading and reporting; a Remote Workplace that additionally provides compressed images with access to full-fidelity images for reading and reporting; and a browser-based WebViewer that provides access to additionally compressed images and reports from compatible devices (including mobile devices).
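The remote access strategies above rely on additionally compressed images, and the comparison table below cites lossless compression by a factor of 2-3. Here is a minimal sketch measuring such a factor on synthetic pixel data; zlib is used purely as a stand-in for a real DICOM lossless transfer syntax (e.g., RLE or JPEG Lossless), and the data is an assumption.

```python
# Measure a lossless compression factor on synthetic 16-bit pixel data.
import zlib
import numpy as np

rng = np.random.default_rng(0)
# Smooth gradient plus mild noise, loosely resembling structured image data.
gradient = np.linspace(0, 3000, 512 * 512).reshape(512, 512)
pixels = (gradient + rng.integers(0, 64, size=(512, 512))).astype(np.uint16)

raw = pixels.tobytes()
packed = zlib.compress(raw, level=6)
print(f"lossless factor: {len(raw) / len(packed):.2f}x")
```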
The provided document describes a 510(k) premarket notification for syngo Dynamics (Version VA40E), a medical image management and processing system. The submission aims to demonstrate substantial equivalence to a predicate device, syngo Dynamics VA30 (K171053).
Based on the document, here's a breakdown of the acceptance criteria and study information:
1. Table of Acceptance Criteria and Reported Device Performance
It is important to note that the document does not explicitly state "acceptance criteria" in a quantitative, measurable sense for specific performance metrics (e.g., sensitivity, specificity, accuracy) because it is a substantial equivalence submission for a software update rather than a new device that needs to establish clinical efficacy from scratch. The acceptance criteria are implicitly met by demonstrating that the new version performs as well as or better than the predicate device, or that any changes do not introduce new safety or effectiveness concerns.
The performance is primarily evaluated through non-clinical testing (verification and validation) to confirm that the updated software continues to function as intended and maintains the performance of the predicate device, especially considering the added functionalities.
Feature/Attribute Tested | Acceptance/Equivalency Standard | Reported Device Performance/Conclusion |
---|---|---|
Indications for Use | Equivalent to predicate device (syngo Dynamics VA30), with minor updates not fundamentally changing the device's purpose. | The Indications for Use for syngo Dynamics (Version VA40E) are fundamentally the same as the predicate device. Updates primarily exclude functions like "storage and display" which no longer fall under FDA's medical device definition. No new specific disease or patient population indications. Shared contraindication (not for digital mammography in U.S.). |
Architecture | Same as predicate (Client-server). | Same (Client-server). |
Supported Modalities | Supported modalities should not introduce new safety/effectiveness concerns compared to predicate, and ideally enhance/extend usability without changing the intended purpose. | Subject device added support for a few DICOM modalities (CT, MR, SC, PET via Corridor4DM) compared to the predicate. These additions enhance and extend conditions for use but do not impact the purpose or actual use of the device. (Note: These are identical to those in the reference device syngo.via VB40A, K191040). |
Image Communication | Standard communication protocols, equivalent to predicate. | Equivalent. Uses TCP/IP, DICOM, HL7, HTTP(S). The predicate used HTTP; the subject uses HTTP(S). All are standard protocols. |
Image Data Compression | Lossless and lossy compression methods, equivalent to predicate. | Equivalent. Both use lossless (factor 2-3) and lossy (JPEG/MP4) compression. The only difference is the file format for lossy compression (MP4 explicitly mentioned for subject device). |
Imaging Algorithms | Equivalent or improved without introducing new safety/effectiveness concerns. | Subject device uses the same imaging algorithms as the predicate, plus additional algorithms: multiplanar reconstruction (MPR), maximum and minimum intensity projection (MIP/MinIP), and volume rendering technique (VRT). These additions are identical to those in the reference device syngo.via VB40A. |
Quantitative Algorithms | Equivalent or improved without introducing new safety/effectiveness concerns. | Subject device adds three quantitative algorithms (distance line, angle, volume) beyond the predicate device's single one (pixel size evaluation). These additions are identical to those in the reference device syngo.via VB40A. (See the sketch after this table.) |
Decision Support | Same as predicate (ability to interface with third-party rules engine). | Same with the predicate device. |
Reporting | Equivalent to predicate's reporting capabilities. | Equivalent. Both offer customizable DICOM Structured Reporting and Collaborative reporting. "Web reporting" in predicate was replaced with "Remote reporting" via Remote Workplace in the subject device, which is considered equivalent. |
Access Strategies | Similar to predicate's access methods. | Similar. "Portal Image Review" in predicate replaced by "Remote Workplace" and "WebViewer" in subject. |
Mobile Device Support | Equivalent web-client for non-diagnostic read-only access. | Equivalent. Both use web client for read-only access (WebViewer for subject, Common Login/Portal Image Review for predicate) on mobile devices for non-diagnostic use. |
Long Term Archive | Same archiving capabilities. | Same. Both provide long-term archive and retrieval of DICOM studies to/from VNA or HSM systems. |
Hardware | Software-only option for server/workstation, with recommended requirements. | Same. Software-only. Hardware not part of medical device, but must meet recommended requirements. |
Virtualization | Same virtualization capabilities. | Same. Provides virtualization of server and client machines. |
Operating System | Updated OS versions without introducing new safety/effectiveness concerns. | Equivalent. Subject device uses updated versions of Microsoft Windows Server and Windows 10 client OS compared to older versions in the predicate. |
Deployment Strategy | Similar to predicate, potentially with added configurations that are equivalent to reference device. | First two deployments (standalone and EHR/EHS Integrated) are equivalent. The third deployment (Multi-modality cardiovascular) uses technology identical to the reference device syngo.via VB40A (K191040), particularly for CT/MR viewing, ensuring equivalency. |
Conformance to Standards | Adherence to recognized consensus standards. | Claims conformance to NEMA PS 3.1-3.20 (2016), ISO IEC 10918-1 (1994), IEC 62366-1 (2020), ISO 14971 (2019), IEEE Std 3333.2.1-2015, IEC 62304 (2015), IEC TR 80001-2-2 (2012), IEC 82304-1 (2016). |
Software Verification & Validation | Conformance to FDA guidance "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" (Moderate Level of Concern) and cybersecurity requirements. | Documentation included for software of a Moderate Level of Concern. Non-clinical testing conducted. Evidence demonstrates conformance with special controls. Cybersecurity considerations addressed to prevent unauthorized access, modification, misuse, etc. Risk Analysis (ISO 14971) completed, risk control implemented, and testing results support that all software specifications met acceptance criteria. Testing for verification and validation was found acceptable in support of determining similarities to the predicate/previously cleared device. |
Overall Safety and Effectiveness | Safe and effective as the predicate, introducing no new safety or effectiveness concerns. | The device is safe, effective, and performs as well as the predicate device. It does not introduce any new significant potential safety risks and is similar to the predicate device. The output is evaluated by clinicians, providing for sufficient review to identify and intervene in case of malfunction. |
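As flagged in the quantitative-algorithms row above, here is a minimal sketch of how distance and angle measurements derive physical values from pixel coordinates via DICOM PixelSpacing, tag (0028,0030). The spacing values and points are illustrative assumptions, not taken from the submission.

```python
# Convert pixel coordinates to millimetres and compute distance and angle.
import math

row_spacing_mm, col_spacing_mm = 0.70, 0.70  # from DICOM tag (0028,0030)

def to_mm(point_px):
    r, c = point_px
    return (r * row_spacing_mm, c * col_spacing_mm)

def distance_mm(p, q):
    (r1, c1), (r2, c2) = to_mm(p), to_mm(q)
    return math.hypot(r2 - r1, c2 - c1)

def angle_deg(vertex, a, b):
    """Angle at `vertex` between the rays vertex->a and vertex->b."""
    (vr, vc), (ar, ac), (br, bc) = map(to_mm, (vertex, a, b))
    u, w = (ar - vr, ac - vc), (br - vr, bc - vc)
    cosang = (u[0] * w[0] + u[1] * w[1]) / (math.hypot(*u) * math.hypot(*w))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

print(distance_mm((0, 0), (30, 40)))        # 35.0 mm at 0.7 mm spacing
print(angle_deg((0, 0), (10, 0), (0, 10)))  # 90.0 degrees
```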
2. Sample Size Used for the Test Set and the Data Provenance
- Sample Size: Not explicitly mentioned in terms of number of cases or studies. The submission states that "All performance testing was conducted in a non-clinical fashion as part of the verification and validation activities for the medical device." This implies functional testing, integration testing, and performance testing against defined specifications rather than testing with a "test set" of patient data in a clinical trial context.
- Data Provenance: Not applicable, as no clinical studies with patient data were conducted. The testing involved non-clinical verification and validation.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
- This information is not provided because no clinical studies were conducted, and therefore, no "ground truth" was established by experts for a test set of clinical cases. The device is for image management and processing, and its performance is assessed through technical verification and validation, ensuring it functions correctly and aligns with the predicate device.
4. Adjudication Method for the Test Set
- Not applicable as no clinical test set requiring expert adjudication was used.
5. If a Multi Reader Multi Case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No, an MRMC comparative effectiveness study was not done. The device, syngo Dynamics (Version VA40E), is a medical image management and processing system, not an AI-assisted diagnostic tool designed to directly improve human reader performance in interpreting images. It provides functionalities for image manipulation, quantification, review, and reporting.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- The document implies that standalone software verification and validation were performed for the syngo Dynamics (Version VA40E) software, as it is a "software only medical device." The tests assessed the functionality of the device itself (e.g., image processing algorithms, communication protocols, reporting features) to ensure it meets specifications, which is a form of standalone performance assessment. However, it's not a standalone diagnostic performance study in the sense of a deep learning algorithm detecting disease.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
- Again, this is not explicitly stated, as no clinical studies establishing diagnostic performance with a ground truth were conducted. For non-clinical software testing, the "ground truth" would be the expected output or behavior validated against established functional requirements, design specifications, and relevant industry standards (e.g., DICOM standard conformance for image handling). (A minimal sketch of such a check follows this list.)
8. The Sample Size for the Training Set
- Not applicable. The device is a software system for image management and processing, not a machine learning model that requires a training set of data.
9. How the Ground Truth for the Training Set Was Established
- Not applicable, as there is no training set for this type of software.
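Tying back to point 7: here is a minimal sketch, assuming pydicom and its bundled sample data, of a round-trip data-integrity check of the kind such non-clinical verification typically includes. The file name and attribute choices are illustrative.

```python
# Round-trip a DICOM dataset through disk and assert key attributes survive.
import pydicom
from pydicom.data import get_testdata_file

src = get_testdata_file("CT_small.dcm")  # sample file shipped with pydicom
ds = pydicom.dcmread(src)
ds.save_as("roundtrip.dcm")
ds2 = pydicom.dcmread("roundtrip.dcm")

assert ds2.SOPInstanceUID == ds.SOPInstanceUID
assert ds2.Rows == ds.Rows and ds2.Columns == ds.Columns
assert ds2.PixelData == ds.PixelData  # byte-for-byte pixel integrity
print("round-trip data-integrity check passed")
```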
(86 days)
MAMMOVISTA B.smart is a dedicated softcopy review environment for both screening and diagnostic mammography as well as digital breast tomosynthesis. Its user interface and workflow have been optimized to support experienced mammography and tomosynthesis reviewers in both screening and diagnostic reading. Efficiency and reading quality are supported by various specialized features. MAMMOVISTA B.smart provides visualization and image enhancement tools to aid a qualified radiologist in the review of digital mammography and digital breast tomosynthesis datasets, as well as other modalities of breast images.
MAMMOVISTA B.smart is an optional software application for the Siemens Healthineers syngo.via platform (K191040). MAMMOVISTA B.smart is an image viewing and processing software environment dedicated to breast image display. It is designed to provide the performance required for the high data volume of digital tomosynthesis and the display of multi-modality breast images, such as those from MRI and ultrasound. Individual workflows can be adapted for either screening or diagnostic purposes.
MAMMOVISTA B.smart runs on a PC and can be used for Mammography image review together with monitors cleared for Mammography diagnostics. The software solution provides for the display of DICOM compatible information, such as breast density and CAD (Computer Aided Diagnostics) markers.
The provided text does not contain the detailed information needed to answer your request about the acceptance criteria and the study that proves the device meets them. The document is a 510(k) summary for a medical device (MAMMOVISTA B.smart) and primarily focuses on establishing substantial equivalence to a predicate device.
Specifically, the document states:
- "Non-clinical tests (integration and functional) were conducted on the MAMMOVISTA B.smart during product development. The risk analysis was completed and risk controls were implemented to mitigate identified hazards. The test results support that all the software specifications have met the acceptance criteria. Verification and validation testing were found acceptable to support the claim of substantial equivalence." (page 7)
- "Siemens did not conduct any clinical tests for the subject device." (page 7)
Therefore, I cannot provide:
- A table of acceptance criteria and reported device performance.
- Sample size and data provenance for a test set.
- Number and qualifications of experts for ground truth.
- Adjudication method for a test set.
- Information on a multi-reader multi-case (MRMC) comparative effectiveness study or human reader improvement with AI.
- Information on a standalone performance study, as clinical tests were not conducted.
- Type of ground truth used.
- Sample size for the training set.
- How ground truth for the training set was established.
The document only states that "all the software specifications have met the acceptance criteria" based on non-clinical (integration and functional) tests and risk analysis, but it does not detail those criteria or the specific results of these tests. It explicitly states no clinical tests were performed.