510(k) Data Aggregation
(237 days)
syngo.via RT Image Suite VC10 is a 3D and 4D image visualization, multimodality manipulation, and contouring tool that supports the preparation of treatments, including but not limited to those performed with radiation (for example, Brachytherapy, Particle Therapy, External Beam Radiation Therapy).
It provides tools to view existing contours and to create, edit, modify, and copy contours of body regions such as, but not limited to, the skin outline, targets, and organs-at-risk. It also provides functionality to create simple geometric treatment plans. Contours, images, and treatment plans can subsequently be exported to a Treatment Planning System.
The software combines the following digital image processing and visualization tools (not all of which may be available at every customer site):
- Multimodality viewing and contouring of anatomical, functional, and multiparametric images such as but not limited to CT, PET, PET/CT, MRI, Linac CBCT images
- Multiplanar reconstruction (MPR) thin/thick, minimum intensity projection (MIP), volume rendering technique (VRT)
- Freehand and semi-automatic contouring of regions-of-interest on any orientation including oblique
- Automated Contouring on CT and MR images, including known (diagnosed) brain metastases
- Creation of contours on images supported by the application without prior assignment of a planning CT or planning MR
- Manual and semi-automatic registration using rigid and deformable registration
- Supports the user in comparing, contouring, and adapting contours based on datasets acquired with different imaging modalities and at different time points
- Supports multi-modality image fusion
- Visualization and contouring of moving tumors and organs
- Management of points of interest including but not limited to the isocenter
- Creation of simple geometric treatment plans
- Generation of a synthetic CT based on pre-defined MR acquisitions
syngo.via RT Image Suite VC10 is an image analysis and radiation therapy preparation software that provides multimodality image viewing, registration, segmentation, synthetic CT generation, and patient marking workflows. Within the medical device syngo.via RT Image Suite VC10, the name 'CT Sim&GO' is used when the software is deployed in the CT scanner workflow. CT Sim&GO is not a device of its own; it is a subset of syngo.via RT Image Suite VC10 functionality that can be accessed from the CT scanner workplace, comprising the Patient Marking and Beam Placement modules. The current submission includes modifications affecting the following functionalities:
MR Autocontouring: The AI‑based MR autocontouring functionality has been introduced to include segmentation of previously diagnosed brain metastases, in addition to MR‑based segmentation of brain OARs and male pelvis OARs. The metastasis‑specific model is restricted to identification of metastases already diagnosed by a clinician, and the software does not provide diagnostic capability or detect new or unknown lesions. These contours remain fully editable by the user.
CT Autocontouring: syngo.via RT Image Suite VC10 includes 29 new organs and structures for deep‑learning–based CT autocontouring. The underlying DL architecture is unchanged; however, additional segmentation guidelines and organ‑coverage expansion were incorporated.
Improvements to Isocenter Definition & Patient Marking: The patient-marking workflow includes semi-automated isocenter estimation for breast (originally cleared under K192065) and vertebral regions (originally cleared under K220783). A new capability sends coordinates from a movable laser positioning system to syngo.via RT Image Suite VC10, enabling a workflow in which the isocenter is set directly on the skin and then transferred to the planning images.
Synthetic CT: The synthetic CT feature was updated to include a new 3D deep‑learning–based algorithm for brain and pelvis, replacing the prior 2D model and improving HU accuracy and geometric fidelity.
Here's a breakdown of the acceptance criteria and study details for the syngo.via RT Image Suite VC10 device, based on the provided 510(k) summary:
Acceptance Criteria and Device Performance for syngo.via RT Image Suite VC10
1. Table of Acceptance Criteria and Reported Device Performance
The device includes several AI-based auto-contouring functionalities and a synthetic CT feature. The acceptance criteria and reported performance vary by feature.
1.1 CT Auto-Contouring (Existing Organs)
| Structure Group | Acceptance Criterion (DICE Score) | Reported Device Performance (Mean DICE) |
|---|---|---|
| Head and Neck | Statistical non-inferiority of the Dice score compared with the reference predicate (lower 95th percentile confidence bound > mean reference performance - 10% margin). | 0.37 (Optic Chiasm) - 0.99 (Body) |
| Thorax and Abdomen | Statistical non-inferiority of the Dice score compared with the reference predicate (lower 95th percentile confidence bound > mean reference performance - 10% margin). | 0.42 (LAD) - 0.96 (Liver) |
| Pelvis | Statistical non-inferiority of the Dice score compared with the reference predicate (lower 95th percentile confidence bound > mean reference performance - 10% margin). | 0.68 (LN Presacral) - 0.95 (Bladder, Proximal Femur Left) |
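The Dice score used throughout these tables measures volumetric overlap between an automated contour and its reference contour, ranging from 0 (no overlap) to 1 (identical). A minimal sketch of the standard formula, assuming binary voxel masks as NumPy arrays (the array names are illustrative, not taken from the device software):

```python
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Example: a 4-voxel square vs. a 6-voxel rectangle with 4 voxels of overlap
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(dice_score(a, b))  # 2*4 / (4+6) = 0.8
```

Note how Dice depends on structure size: the same absolute boundary error costs far more Dice on a small structure (e.g. the optic chiasm at 0.37) than on a large one (e.g. the body outline at 0.99).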
1.2 CT Auto-Contouring (New Organs)
| Structure Group | Acceptance Criteria | Reported Device Performance (Mean DICE / Mean ASSD) |
|---|---|---|
| New Head and Neck OARs | 1. Statistical non-inferiority of the Dice score compared with a reference device (lower 95th percentile confidence bound > mean reference performance - 10% margin). 2. Statistical non-inferiority of the ASSD score compared with a reference device (upper 95th percentile confidence bound < mean reference performance + Std.Dev). 3. Average user evaluation of 3 or higher (on a 4-point scale for time savings). | DICE: 0.68 (Lacrimal Gland Right) - 0.94 (Humeral Head Right) ASSD (mm): 0.7 (Humeral Head Left) - 1.15 (Lacrimal Gland Right) |
| New Thorax OARs | 1. Statistical non-inferiority of the Dice score compared with a reference device (lower 95th percentile confidence bound > mean reference performance - 10% margin). 2. Statistical non-inferiority of the ASSD score compared with a reference device (upper 95th percentile confidence bound < mean reference performance + Std.Dev). 3. Average user evaluation of 3 or higher (on a 4-point scale for time savings). | DICE: 0.28 (CA Left Circumflex) - 0.75 (N2 Station 3A) ASSD (mm): 1.19 (N1 Station 10 Left) - 5.17 (N2 Station 9 Left) |
| New Pelvis OARs | 1. Statistical non-inferiority of the Dice score compared with a reference device (lower 95th percentile confidence bound > mean reference performance - 10% margin). 2. Statistical non-inferiority of the ASSD score compared with a reference device (upper 95th percentile confidence bound < mean reference performance + Std.Dev). 3. Average user evaluation of 3 or higher (on a 4-point scale for time savings). | DICE: 0.92 (Sacrum) - 0.95 (Femoral Head Left, Hip Bone Right) ASSD (mm): 0.4 (Hip Bone Right) - 2.92 (Bowel Bag) |
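The second metric in these tables, ASSD (average symmetric surface distance), complements Dice by measuring boundary error in millimeters rather than volumetric overlap. A sketch of one common way to compute it, assuming binary voxel masks and isotropic-or-known voxel spacing (the helper names are illustrative):

```python
import numpy as np
from scipy import ndimage

def assd(pred: np.ndarray, ref: np.ndarray,
         spacing=(1.0, 1.0, 1.0)) -> float:
    """Average symmetric surface distance (mm) between two binary masks:
    the pooled mean of distances from each surface voxel of one mask to
    the nearest surface voxel of the other."""
    def surface(mask):
        # surface voxels = mask voxels removed by one erosion step
        return mask & ~ndimage.binary_erosion(mask)

    sp = surface(pred.astype(bool))
    sr = surface(ref.astype(bool))
    # distance (in mm) from every voxel to the nearest surface voxel
    # of the *other* mask
    dt_ref = ndimage.distance_transform_edt(~sr, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~sp, sampling=spacing)
    d_pred_to_ref = dt_ref[sp]
    d_ref_to_pred = dt_pred[sr]
    return float(np.concatenate([d_pred_to_ref, d_ref_to_pred]).mean())
```

Because ASSD is an absolute distance, lower is better, which is why its acceptance criterion bounds the upper confidence limit while Dice bounds the lower one.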
1.3 CT Auto-Contouring (Subgroup Analysis for Photon-Counting CT)
| Body Region | Acceptance Criterion (Mean DICE) | Reported Device Performance (Mean DICE) |
|---|---|---|
| Image Contrast | Mean Dice >= reference mean Dice (between reference and variable image impression). | 0.90 (Head and Neck) - 0.97 (Abdomen, Body) |
| Image Resolution | Mean Dice >= reference mean Dice (between reference and variable image impression). | 0.93 (Head and Neck) - 0.99 (Abdomen, Body) |
1.4 MR Auto-Contouring (Brain Metastasis)
| Metric | Acceptance Criterion | Reported Device Performance (Mean) |
|---|---|---|
| Lesionwise DICE | Statistical non-inferiority compared with the reference device (lower 95th percentile confidence bound > mean reference dice coefficient - 10% margin). | 0.74 |
| Lesionwise Sensitivity | Statistical non-inferiority compared with the reference device (lower 95th percentile confidence bound > mean reference sensitivity - 10% margin). | 92.5% |
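The summary reports lesionwise sensitivity (92.5%) without defining the detection criterion. A common convention is to treat each connected component of the reference segmentation as one lesion and count it as detected when the prediction overlaps at least some fraction of its voxels; the `min_overlap` threshold below is an assumption for illustration, not a value from the submission:

```python
import numpy as np
from scipy import ndimage

def lesionwise_sensitivity(pred: np.ndarray, ref: np.ndarray,
                           min_overlap: float = 0.1) -> float:
    """Fraction of reference lesions (connected components) that the
    prediction hits with at least `min_overlap` of their voxels."""
    labels, n_lesions = ndimage.label(ref.astype(bool))
    if n_lesions == 0:
        return 1.0  # nothing to detect
    detected = 0
    for i in range(1, n_lesions + 1):
        lesion = labels == i
        overlap = np.logical_and(lesion, pred).sum() / lesion.sum()
        if overlap >= min_overlap:
            detected += 1
    return detected / n_lesions
```

Lesionwise Dice is computed analogously, restricting the Dice calculation to each matched lesion rather than pooling all voxels, so that small metastases are not dominated by large ones.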
1.5 MR Auto-Contouring (Brain and Pelvis OARs)
| Structure Group | Acceptance Criteria | Reported Device Performance (Mean DICE / Mean ASSD) |
|---|---|---|
| Brain OARs | 1. Statistical non-inferiority of the DICE score compared with the reference device (lower 95th percentile confidence bound > mean reference performance - 10% margin). 2. Statistical non-inferiority of the ASSD score compared with the reference device (upper 95th percentile confidence bound < mean reference performance + Std.Dev). 3. Average user evaluation of 3 or higher (on a 4-point scale for time savings). | DICE: 0.49 (Cochlea Right) - 0.93 (Brainstem) ASSD (mm): 0.36 (Eye Right) - 1.95 (Lacrimal Gland Left) |
| Pelvis OARs (T1) | 1. Statistical non-inferiority of the DICE score compared with the reference device (lower 95th percentile confidence bound > mean reference performance - 10% margin). 2. Statistical non-inferiority of the ASSD score compared with the reference device (upper 95th percentile confidence bound < mean reference performance + Std.Dev). 3. Average user evaluation of 3 or higher (on a 4-point scale for time savings). | DICE: 0.93 (Femur head Left) - 0.98 (Body) ASSD (mm): 0.96 (Femur head Right) - 1.82 (Body) |
| Pelvis OARs (T2) | 1. Statistical non-inferiority of the DICE score compared with the reference device (lower 95th percentile confidence bound > mean reference performance - 10% margin). 2. Statistical non-inferiority of the ASSD score compared with the reference device (upper 95th percentile confidence bound < mean reference performance + Std.Dev). 3. Average user evaluation of 3 or higher (on a 4-point scale for time savings). | DICE: 0.72 (Seminal Vesicles) - 0.9 (Bladder) ASSD (mm): 0.91 (Penile Bulb) - 2.47 (Anus) |
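The non-inferiority criterion repeated throughout these tables ("lower 95th percentile confidence bound > mean reference performance - 10% margin") is not accompanied by a description of the exact statistical procedure. One plausible reading, sketched below as an assumption rather than the submission's actual method, models it as a one-sided 95% t-based confidence bound on the mean score:

```python
import numpy as np
from scipy import stats

def non_inferior(scores, ref_mean, margin=0.10, alpha=0.05) -> bool:
    """One-sided non-inferiority check (sketch): passes when the lower
    (1 - alpha) confidence bound on the mean score exceeds
    (reference mean - margin)."""
    scores = np.asarray(scores, dtype=float)
    n = scores.size
    sem = scores.std(ddof=1) / np.sqrt(n)           # standard error of the mean
    lower = scores.mean() - stats.t.ppf(1 - alpha, df=n - 1) * sem
    return bool(lower > ref_mean - margin)
```

Under this reading, a feature can pass with a modest mean Dice (e.g. 0.49 for the cochlea) as long as it is statistically no worse than the reference device by more than the 10% margin.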
1.6 Synthetic CT (Pelvis and Brain)
Acceptance criteria for Geometric Fidelity and HU Accuracy are mentioned but not specifically detailed in the provided text. The study states "The testing ensures the quantitative performance of the resulting synthetic CT. Analysis was performed on geometric fidelity and HU accuracy."
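Since the summary does not detail how HU accuracy was quantified, the following is only one common formulation: mean absolute error in Hounsfield units between the synthetic CT and a co-registered reference CT, evaluated inside a body mask (all names are illustrative assumptions):

```python
import numpy as np

def hu_mae(synthetic_ct: np.ndarray, reference_ct: np.ndarray,
           body_mask: np.ndarray) -> float:
    """Mean absolute HU error between synthetic and reference CT,
    evaluated only inside the body mask (air outside is excluded)."""
    diff = synthetic_ct[body_mask].astype(float) - reference_ct[body_mask]
    return float(np.abs(diff).mean())
```

Geometric fidelity is typically assessed separately, e.g. via contour or landmark distances between the synthetic and reference volumes, but the summary does not specify the metric used.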
1.7 Semi-automated isocenter estimates of vertebrae
Acceptance criterion is "percentage of cases which need corrections" but the specific threshold is not detailed. The study mentions "Analysis was performed on percentage of cases which need corrections." and that it was cleared under K220450.
2. Sample Size Used for the Test Set and Data Provenance
| Feature/Model | Number of Subjects (Test Set) | Data Provenance (Country of Origin, Retrospective/Prospective) |
|---|---|---|
| CT Auto-Contouring | 469 | Europe (IT, PT, CH, UK, NL, DE), North America (US, CA), South America (BR), Australia, Asia (JP, IN); Retrospective (implied, as it's from existing data) |
| MR Brain Metastasis Model | 30 | USA, EU; Retrospective (implied) |
| MR Brain OAR Model | 81 (reported as number of test data sets) | USA, EU; Retrospective (implied) |
| MR Pelvis OAR Model | 153 | North America (US), Europe (Germany, Romania, France, Switzerland, Spain), Australia; Retrospective (implied) |
| Synthetic CT Model | 51 | US, Europe; Retrospective (implied) |
| Semi-automated isocenter estimation/laser-based (breast) | 10 | Not specified beyond general statement: "All test datasets were independent of training datasets." |
| Semi-automated isocenter estimation/laser-based (vertebra) | 10 | Not specified beyond general statement: "All test datasets were independent of training datasets." |
| Subgroup analysis for photon-counting CT | 199 oncological patient cases | Not explicitly stated but mentions "various CT image contrasts... and image resolutions" and "oncological patient cases with and without metal implants" |
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Not explicitly stated as a single number. The document mentions "an expert team" for most ground truth annotations.
- Qualifications: "expert clinicians using well-established international contouring guidelines" and "rigorous independent quality assurance reviews." For spine landmarks, "an expert team" was used. Specific specialties (e.g., radiologist, radiation oncologist) or years of experience are not provided.
4. Adjudication Method for the Test Set
The document consistently states that ground truth contours created by expert teams underwent "rigorous independent quality assessment" or "rigorous independent quality assurance reviews." This implies an adjudication process where an independent expert or team reviewed and likely corrected or confirmed the initial expert annotations. However, the specific method (e.g., 2+1, 3+1, etc.) is not detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and the effect size
- MRMC Study: No, a formal MRMC comparative effectiveness study comparing human readers with AI vs. without AI assistance was not reported in this summary.
- The user evaluation for new organs in auto-contouring (CT and MR OARs) included a "four-point scale to evaluate each contour in the context of time savings compared to contouring from scratch," which assesses user perception of usefulness but is not an MRMC comparative effectiveness study.
- Effect Size: Therefore, no effect size for human reader improvement with AI assistance is provided.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
Yes, extensive standalone algorithm performance testing was done for all AI-based features. The metrics reported (DICE, ASSD, Hausdorff Distance, Sensitivity, False Positive Rate, HU accuracy, geometric fidelity) are all measures of the algorithm's performance without human intervention after the contour is generated. The contours are stated to be "fully editable by the user," but the reported metrics reflect the raw output of the algorithms.
7. The Type of Ground Truth Used
- CT auto-contouring, MR brain metastasis, MR brain OAR, MR pelvis OAR: "Manual ground-truth segmentations were annotated by an expert team based on well accepted international contouring guidelines, followed by a rigorous independent quality assessment." This indicates expert consensus (or at least expert-generated and quality-assured) ground truth.
- Synthetic CT: Ground truth would likely be the actual (e.g., diagnostic-quality) CT images used for comparison, with analysis performed on "geometric fidelity and HU accuracy."
- Semi-automated isocenter estimates of vertebrae: "Manual ground-truth of spine landmarks were annotated by an expert team, followed by a rigorous independent quality assessment." This is also expert consensus.
8. The Sample Size for the Training Set
The document mentions "Training datasets consisted of curated, multicenter CT and MR image collections with expert-annotated reference standards and standardized preprocessing." However, the exact sample size for the training set is not explicitly provided in this summary for any of the features. It only states that "All test datasets were independent of training datasets."
9. How the Ground Truth for the Training Set Was Established
The ground truth for the training set was established similarly to the test set: "curated, multicenter CT and MR image collections with expert-annotated reference standards and standardized preprocessing." This implies expert annotation, consistent with how the test set ground truth was created.