Search Results
Found 3 results
510(k) Data Aggregation
(198 days)
AI-Rad Companion Organs RT is a post-processing software intended to automatically contour predefined structures on DICOM CT and MR images using deep-learning-based algorithms.
Contours that are generated by AI-Rad Companion Organs RT may be used as input for clinical workflows including external beam radiation therapy treatment planning. AI-Rad Companion Organs RT must be used in conjunction with appropriate software such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept contours generated by AI-Rad Companion Organs RT.
The output of AI-Rad Companion Organs RT is intended to be used by trained medical professionals.
The software is not intended to automatically detect or contour lesions.
AI-Rad Companion Organs RT provides automatic segmentation of pre-defined structures such as Organs-at-risk (OAR) from CT or MR medical series, prior to dosimetry planning in radiation therapy. AI-Rad Companion Organs RT is not intended to be used as a standalone diagnostic device and is not a clinical decision-making software.
CT or MR series of images serve as input for AI-Rad Companion Organs RT and are acquired as part of a typical scanner acquisition. Once processed by the AI algorithms, the generated contours in DICOM-RTSTRUCT format are reviewed in a confirmation window, allowing the clinical user to confirm or reject the contours before sending them to the target system. Optionally, the user may choose to transfer the contours directly to a configurable DICOM node (e.g., the TPS, which is the standard location for the planning of radiation therapy).
The output of AI-Rad Companion Organs RT must be reviewed and, where necessary, edited with appropriate software before accepting generated contours as input to treatment planning steps. The output of AI-Rad Companion Organs RT is intended to be used by qualified medical professionals. The qualified medical professional can perform a complementary manual editing of the contours or add any new contours in the TPS (or any other interactive contouring application supporting DICOM-RT objects) as part of the routine clinical workflow.
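As context for the DICOM-RTSTRUCT hand-off described above, the sketch below shows one generic way to inspect the structures contained in an RT Structure Set file using the open-source pydicom library. This is an illustration only, not part of the cleared device or its workflow; the file path is hypothetical.

```python
# Minimal sketch: inspect the structures in a DICOM-RTSTRUCT file.
# Assumes the open-source pydicom package; the file path is hypothetical.
import pydicom

def list_rtstruct_contours(path: str) -> list[str]:
    """Return the ROI names stored in an RT Structure Set."""
    ds = pydicom.dcmread(path)
    if getattr(ds, "Modality", None) != "RTSTRUCT":
        raise ValueError("Expected an RTSTRUCT object")
    # StructureSetROISequence holds one item per region of interest (ROI).
    return [roi.ROIName for roi in ds.StructureSetROISequence]

if __name__ == "__main__":
    for name in list_rtstruct_contours("organs_rt_output.dcm"):  # hypothetical path
        print(name)
```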
Here's a summary of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) summary for AI-Rad Companion Organs RT:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria and reported performance are detailed for both MR and CT contouring algorithms.
MR Contouring Algorithm Performance
| Validation Testing Subject | Acceptance Criteria | Reported Device Performance (Average) |
|---|---|---|
| MR Contouring Organs | The average segmentation accuracy (Dice value) of all subject device organs should be equivalent or better than the overall segmentation accuracy of the predicate device. The overall fail rate for each organ/anatomical structure is smaller than 15%. | Dice [%]: 85.75% (95% CI: [82.85, 87.58]) <br> ASSD [mm]: 1.25 (95% CI: [0.95, 2.02]) <br> Fail [%]: 2.75% |
| (Compared to Reference Device MRCAT Pelvis (K182888)) | | AI-Rad Companion Organs RT VA50 – all organs: 86% (83-88) <br> AI-Rad Companion Organs RT VA50 – common organs: 82% (78-84) <br> MRCAT Pelvis (K182888) – all organs: 77% (75-79) |
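To make the Dice and fail-rate criteria in the table concrete, here is a minimal sketch of the standard Dice overlap metric and an organ-level fail rate on binary masks. It illustrates the usual definitions only, not the manufacturer's validation code; the summary does not define the per-case failure rule, so the per-case Dice cutoff used here is an assumption.

```python
# Minimal sketch of the standard Dice overlap metric and an organ-level
# fail rate, assuming a hypothetical per-case Dice cutoff defines "failure".
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2*|A intersect B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else float(2.0 * np.logical_and(pred, truth).sum() / denom)

def fail_rate(dice_scores: list[float], per_case_cutoff: float = 0.5) -> float:
    """Fraction of cases whose Dice falls below the (assumed) cutoff."""
    scores = np.asarray(dice_scores)
    return float((scores < per_case_cutoff).mean())

# Example: an organ passes the summary criteria when its mean Dice is at least
# the predicate's and its fail rate stays under 15%. Scores are placeholders.
scores = [0.88, 0.91, 0.84, 0.86]
print(f"mean Dice = {np.mean(scores):.3f}, fail rate = {fail_rate(scores):.1%}")
```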
CT Contouring Algorithm Performance
| Validation Testing Subject | Acceptance Criteria | Reported Device Performance (Average) |
|---|---|---|
| Organs in Predicate Device | All the organs segmented in the predicate device are also segmented in the subject device. The average (AVG) Dice score difference between the subject and predicate device is smaller than 3%. | The document states the subject device was "equivalent or had better performance than the predicate device," implicitly meeting this criterion, but does not give a specific numerical difference. |
| New Organs for Subject Device | Baseline value defined by subtracting a 5% error margin (for Dice) or 0.1 mm (for ASSD) from the reference value; the subject device's value for the selected reference metric must be higher than the defined baseline value. | Regional averages: <br> Head & Neck: Dice 76.5% <br> Head & Neck lymph nodes: Dice 69.2% <br> Thorax: Dice 82.1% <br> Abdomen: Dice 88.3% <br> Pelvis: Dice 84.0% |
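The "new organs" criterion is essentially a margin test against a reference value. Below is a minimal sketch of one plausible reading of the Dice half of that rule; the reference values and the handling of the ASSD margin are not fully specified in the summary, so everything in the example is an assumption rather than the manufacturer's method.

```python
# Minimal sketch of a margin-based acceptance check for newly added organs,
# assuming the rule means: subject mean Dice must exceed (reference Dice - 5%).
# The reference value below is a placeholder, not a figure from the 510(k).

def passes_dice_margin(subject_dice: float, reference_dice: float,
                       margin: float = 0.05) -> bool:
    """True when the subject stays within `margin` of the reference Dice."""
    baseline = reference_dice - margin
    return subject_dice > baseline

# Hypothetical example: reference Dice of 0.80 for an organ, subject device
# averages 0.765 -> baseline 0.75, so the check passes.
print(passes_dice_margin(subject_dice=0.765, reference_dice=0.80))
```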
2. Sample Sizes Used for the Test Set and Data Provenance
- MR Contouring Algorithm Test Set:
- Sample Size: N = 66
- Data Provenance: Retrospective study, data from multiple clinical sites across North America & Europe. The document further breaks this down for different sequences:
- T1 Dixon W: 30 datasets (USA: 15, EU: 15)
- T2 W TSE: 36 datasets (USA: 25, EU: 11)
- Manufacturer: All Siemens Healthineers scanners.
- CT Contouring Algorithm Test Set:
- Sample Size: N = 414
- Data Provenance: Retrospective study, data from multiple clinical sites across North America, South America, Asia, Australia, and Europe. This dataset is distributed across three cohorts:
- Cohort A: 73 datasets (Germany: 14, Brazil: 59) - Siemens scanners only
- Cohort B: 40 datasets (Canada: 40) - GE: 18, Philips: 22 scanners
- Cohort C: 301 datasets (NA: 165, EU: 44, Asia: 33, SA: 19, Australia: 28, Unknown: 12) - Siemens: 53, GE: 59, Philips: 119, Varian: 44, Others: 26 scanners
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- The ground truth annotations were "drawn manually by a team of experienced annotators mentored by radiologists or radiation oncologists."
- "Additionally, a quality assessment including review and correction of each annotation was done by a board-certified radiation oncologist using validated medical image annotation tools."
- The exact number of individual annotators or experts is not specified beyond "a team" and "a board-certified radiation oncologist." Their specific experience level (e.g., "10 years of experience") is not given beyond "experienced" and "board-certified."
4. Adjudication Method for the Test Set
- The document implies a consensus/adjudication process: "a quality assessment including review and correction of each annotation was done by a board-certified radiation oncologist." This suggests that initial annotations by the "experienced annotators" were reviewed and potentially corrected by a higher-level expert. The specific number of reviewers for each case (e.g., 2+1, 3+1) is not explicitly stated, but it was at least a "team" providing initial annotations followed by a "board-certified radiation oncologist" for quality assessment/correction.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- No, the document does not describe a Multi-Reader Multi-Case (MRMC) comparative effectiveness study evaluating how much human readers improve with AI vs. without AI assistance. The validation studies focused on the standalone performance of the algorithm against expert-defined ground truth.
6. If a Standalone (i.e., algorithm only, without human-in-the-loop) Performance Study was done
- Yes, the performance validation described in section 10 ("Performance Software Validation") is a standalone (algorithm only) performance study. The metrics (Dice, ASSD, Fail Rate) compare the algorithm's output directly to the established ground truth. The device produces contours that must be reviewed and edited by trained medical professionals, but the validation tests the AI's direct output.
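For reference, the ASSD figure reported above is an average symmetric surface distance. A minimal sketch of its usual voxel-mask definition, using SciPy distance transforms, is shown below; this illustrates the standard metric only and is not the validation code used in the submission.

```python
# Minimal sketch of the average symmetric surface distance (ASSD) between two
# binary segmentation masks, using the usual definition (not the vendor's code).
import numpy as np
from scipy import ndimage

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels: mask minus its erosion."""
    mask = mask.astype(bool)
    return mask & ~ndimage.binary_erosion(mask)

def assd(pred: np.ndarray, truth: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Average of distances from each surface to the other, in mm."""
    sp, st = surface(pred), surface(truth)
    # Distance (in mm) from every voxel to the nearest surface voxel.
    dist_to_truth = ndimage.distance_transform_edt(~st, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~sp, sampling=spacing)
    d_pred = dist_to_truth[sp]    # pred-surface -> truth-surface distances
    d_truth = dist_to_pred[st]    # truth-surface -> pred-surface distances
    return float(np.concatenate([d_pred, d_truth]).mean())
```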
7. The Type of Ground Truth Used
- The ground truth used was expert consensus/manual annotation. It was established by "manual annotation" by "experienced annotators mentored by radiologists or radiation oncologists" and subsequently reviewed and corrected by a "board-certified radiation oncologist." Annotation protocols followed NRG/RTOG guidelines.
8. The Sample Size for the Training Set
- MR Contouring Algorithm Training Set:
- T1 VIBE/Dixon W: 219 datasets
- T2 W TSE: 225 datasets
- Prostate (T2W): 960 datasets
- CT Contouring Algorithm Training Set: The training dataset sizes vary per organ group:
- Cochlea: 215
- Thyroid: 293
- Constrictor Muscles: 335
- Chest Wall: 48
- LN Supraclavicular, Axilla Levels, Internal Mammaries: 228
- Duodenum, Bowels, Sigmoid: 332
- Stomach: 371
- Pancreas: 369
- Pulmonary Artery, Vena Cava, Trachea, Spinal Canal, Proximal Bronchus: 113
- Ventricles & Atriums: 706
- Descending Coronary Artery: 252
- Penile Bulb: 854
- Uterus: 381
9. How the Ground Truth for the Training Set Was Established
- For both training and validation data, the ground truth annotations were established using the "Standard Annotation Process." This involved:
- Annotation protocols defined following NRG/RTOG guidelines.
- Manual annotations drawn by a team of experienced annotators mentored by radiologists or radiation oncologists using an internal annotation tool.
- A quality assessment including review and correction of each annotation by a board-certified radiation oncologist using validated medical image annotation tools.
- The document explicitly states that the "training data used for the training of the algorithm is independent of the data used to test the algorithm."
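Train/test independence of this kind is commonly enforced by splitting at the patient level, so that no patient contributes images to both sets. The sketch below shows a generic patient-level split with scikit-learn's GroupShuffleSplit; it illustrates the principle only, is not the manufacturer's actual data-handling procedure, and uses synthetic identifiers.

```python
# Generic sketch of a patient-level train/test split (no patient appears in
# both sets), one common way to keep training and test data independent.
# Illustration only, not the manufacturer's data-handling code.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_images = 100
patient_ids = rng.integers(0, 30, size=n_images)   # hypothetical patient IDs
X = np.arange(n_images)                             # stand-ins for image indices

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, groups=patient_ids))

# No patient ID may appear on both sides of the split.
assert not set(patient_ids[train_idx]) & set(patient_ids[test_idx])
print(f"{len(train_idx)} training images, {len(test_idx)} test images")
```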
(437 days)
MRI Planner is a software-only medical device intended for use by trained radiation oncologists, dosimetrists and physicists to process images from MRI systems to
- provide the operator with information of tissue properties for radiation attenuation estimation purposes in photon external beam radiotherapy treatment planning, and to
- derive contours for input to radiation treatment planning by assisting in localization and definition of healthy anatomical structures.
MRI Planner is not intended to automatically contour tumor clinical target volumes.
MRI Planner is indicated for radiotherapy planning of adult patients for primary and metastatic cancers in the brain and head-neck regions, as well as soft tissue cancers in the pelvic region.
MRI Planner generates synthetic CT images for radiation attenuation estimation purposes for the pelvis, brain and head-neck regions only. MRI Planner generates automatically derived contours of the bladder, colon and femoral heads, for prostate cancer patients only.
The product MRI Planner is a stand-alone software providing information to the treatment planning process prior to radiotherapy. Based on a DICOM MR image stack, the software generates synthetic CT images that can be used for attenuation calculations in radiotherapy treatment planning for the pelvis, brain and head-neck regions. In addition, the software also generates contours of anatomical structures in the MR image stack, to be used as a starting point for the manual delineation work required in radiotherapy treatment planning. Contours are generated for prostate cancer patients only (bladder, colon and femoral heads).
MRI Planner utilizes pre-trained machine learning models to perform both the conversion to synthetic CT and the automated structure contouring. The models for synthetic CT generation were trained using a dataset comprising MR and CT images for 244 patients acquired in the treatment position at four hospitals. The model for prostate cancer patient auto contouring was trained using a dataset comprising MR images for 175 patients acquired in the treatment position at four hospitals, together with in-house generated expert manual contours. MRI Planner does not display or store DICOM images. The user is advised to use existing software for radiotherapy treatment planning to display and modify generated images and contours.
MRI Planner runs on a standard x86-64 compatible system with a CUDA capable NVIDIA GPU and requires Ubuntu Linux 18.04 operating system.
This document describes the acceptance criteria and the studies conducted to prove that the MRI Planner device meets these criteria. The device is a software-only medical device intended for use by trained radiation oncologists, dosimetrists, and physicists for radiation attenuation estimation and contouring of healthy anatomical structures in radiotherapy treatment planning.
1. Table of Acceptance Criteria and Reported Device Performance
The performance of MRI Planner was evaluated through two main bench tests: a Dose Accuracy Bench Test for synthetic CT (sCT) generation and an Auto Contouring Bench Test for anatomical structure delineation.
| Metric Category | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Dose Accuracy (Synthetic CT) | | |
| Mean Target Dose Difference (sCT-CT) | Implied: dosimetric agreement should be high, with minimal differences between sCT and conventional CT. Specific numerical values are not explicitly stated as "acceptance criteria"; reported performance is compared against a general expectation of high accuracy. | Pelvis: average sCT-CT mean target dose difference of 0.02% ± 0.31%. <br> Head-Neck and Brain: average sCT-CT mean target dose difference of -0.02% ± 0.25%. |
| Non-Target sCT-CT Dose Difference | 99% of cases should have no sub-volumes with an sCT-CT dose difference in excess of 1.0 Gy or 5% of the CT dose. | No cases displayed any sub-volumes with sCT-CT dose differences in excess of 5% or 1.0 Gy, meeting the 99% criterion. |
| High Dose Gamma Evaluation (3%/3mm for Pelvis & Head-Neck; 2%/2mm for Brain) | Gamma index passing rate requirement: 99% for pelvis and head-neck, 98% for brain. | Pelvis, Head-Neck, Brain: 100.0% of cases passed the individual passing rate criterion for all anatomical regions. <br> Average gamma index passing rates: 99.9% (pelvis), 99.8% (head-neck), 99.8% (brain), all surpassing the acceptance criteria. |
| Medium Dose Gamma Evaluation (2%/2mm for all regions) | Gamma index passing rate requirement: 99% for all anatomical regions. | Pelvis: 98.3% of cases passed the individual passing rate criterion. <br> Head-Neck and Brain: 100.0% of cases passed the individual passing rate criterion. <br> Average gamma index passing rates: 99.7% (pelvis), 99.5% (head-neck), 99.9% (brain); the averages surpassed the acceptance criteria, although some individual pelvis cases fell slightly below the 99% per-case criterion. |
| Auto Contouring (bladder, colon, femoral heads for prostate cancer patients) | | |
| Dice Score (DSC) (higher is better) | Implied: high agreement between automatically generated and manual delineations. No specific numerical acceptance criteria are stated; performance is compared to generally accepted high scores for medical image segmentation. | Bladder: 0.95 ± 0.03 <br> Colon: 0.90 ± 0.04 <br> Femoral Head: 0.96 ± 0.01 |
| 95% Hausdorff Distance (HD) (lower is better) | Implied: low spatial disagreement between automatically generated and manual delineations. No specific numerical acceptance criteria are stated; performance is compared to generally accepted low distances for medical image segmentation. | Bladder: 2.69 ± 1.82 <br> Colon: 4.96 ± 3.91 <br> Femoral Head: 2.04 ± 0.49 |
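The gamma criteria in the table combine a dose-difference tolerance with a distance-to-agreement tolerance. The sketch below is a much-simplified global gamma passing-rate calculation on equally shaped dose grids (brute-force neighbourhood search, global normalisation); it illustrates the idea only and is not the clinical gamma implementation used for the bench test.

```python
# Simplified global gamma passing-rate sketch for two dose grids of equal
# shape. Brute-force search in a small neighbourhood; illustrative only.
import numpy as np

def gamma_pass_rate(ref: np.ndarray, eval_: np.ndarray, spacing_mm: float,
                    dose_tol: float = 0.03, dist_tol_mm: float = 3.0,
                    low_dose_cut: float = 0.1) -> float:
    """Fraction of evaluated points with gamma <= 1 (global normalisation)."""
    norm = ref.max()                          # global dose normalisation
    search = int(np.ceil(dist_tol_mm / spacing_mm))
    offsets = [(i, j, k)
               for i in range(-search, search + 1)
               for j in range(-search, search + 1)
               for k in range(-search, search + 1)]
    passed = total = 0
    for idx in np.ndindex(ref.shape):
        if ref[idx] < low_dose_cut * norm:    # skip the low-dose region
            continue
        total += 1
        best_sq = np.inf                      # smallest squared gamma found
        for di, dj, dk in offsets:
            i, j, k = idx[0] + di, idx[1] + dj, idx[2] + dk
            if not (0 <= i < ref.shape[0] and 0 <= j < ref.shape[1]
                    and 0 <= k < ref.shape[2]):
                continue
            dose_term = (eval_[i, j, k] - ref[idx]) / (dose_tol * norm)
            dist_term = spacing_mm * np.sqrt(di * di + dj * dj + dk * dk) / dist_tol_mm
            best_sq = min(best_sq, dose_term ** 2 + dist_term ** 2)
        passed += best_sq <= 1.0
    return passed / max(total, 1)

# Tiny synthetic example on a 10x10x10 grid with 2 mm voxels (placeholder data).
rng = np.random.default_rng(0)
ct_dose = rng.random((10, 10, 10)) * 60.0
sct_dose = ct_dose * (1 + rng.normal(0, 0.005, ct_dose.shape))
print(f"3%/3mm pass rate: {gamma_pass_rate(ct_dose, sct_dose, 2.0):.1%}")
```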
2. Sample Sizes and Data Provenance
For Dose Accuracy Bench Test (Synthetic CT)
- Pelvis:
- Test Set Sample Size: 58 unique pelvis cancer patients.
- Data Provenance: MR (T2w) and CT images acquired in the treatment position at six different hospitals.
- Geographic Distribution: 41% of patient images from the US, 59% from outside the US.
- Demographics: 16% female, 84% male, age range 51-88 years.
- MRI Scanner Data: Acquired at six different MRI scanner models from two different vendors (1.5T and 3T field strengths).
- Head-Neck-Brain:
- Test Set Sample Size: 75 unique head-neck-brain cancer patients.
- Data Provenance: MR (T1-Dixon) and CT images acquired in the treatment position at four different hospitals.
- Geographic Distribution: 55% of patient images from the US, 45% from outside the US.
- Demographics: 39% female, 64% male, age range 41-85 years.
- MRI Scanner Data: Acquired at six different MRI scanner models from two different vendors (1.5T and 3T field strengths).
For Auto Contouring Bench Test
- Test Set Sample Size: 51 unique male prostate cancer patients.
- Data Provenance: MR (T2w) images acquired in the treatment position at five different hospitals.
- Geographic Distribution: 39% of patient images from the US, 61% from outside the US.
- Demographics: Male, age range 51-88 years.
3. Number of Experts and Qualifications for Ground Truth (Auto Contouring)
- Number of Experts: Two expert truthers.
- Qualifications: "Expert truthers" involved in the product development and training dataset generation. No specific professional qualifications (e.g., "Radiologist with X years of experience") are provided, but their involvement in product development and training data generation suggests specialized knowledge.
- Employment: Both truthers were employed by the manufacturer (Spectronic Medical AB) at the time of performing the manual delineations for the bench test.
4. Adjudication Method for Ground Truth (Auto Contouring)
- Adjudication Method: Consensus approach ("using the consensus approach"). This implies the two experts worked together to agree on the final ground truth delineations.
- Guidelines: The consensus was based on US clinical guidelines.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- The provided text does not mention a multi-reader multi-case (MRMC) comparative effectiveness study to evaluate human readers' improvement with or without AI assistance. The studies performed are standalone performance evaluations against ground truth (for auto-contouring) and conventional CT (for synthetic CT dose accuracy).
6. Standalone Algorithm Performance
- Yes, standalone performance was done for both components.
- Synthetic CT Generation: The dosimetric bench tests directly evaluate the performance of the MRI Planner's generated synthetic CT images against conventional CT (which serves as a form of ground truth for dose calculation), without human intervention in the generation process.
- Auto Contouring: The auto-contouring bench tests evaluate the automatically generated delineations against expert manual delineations (ground truth), representing the algorithm's performance without human-in-the-loop during contour generation.
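The 95% Hausdorff distance used in the auto-contouring bench test is a robust surface-distance measure. A minimal sketch of its usual definition on voxel masks follows (SciPy distance transforms, 95th percentile of the pooled surface-to-surface distances); again, this illustrates the standard metric, not the study code.

```python
# Minimal sketch of the 95th-percentile (robust) Hausdorff distance between
# two binary masks, using the metric's usual definition (not the study code).
import numpy as np
from scipy import ndimage

def hd95(pred: np.ndarray, truth: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th percentile of symmetric surface-to-surface distances, in mm."""
    def boundary(mask):
        mask = mask.astype(bool)
        return mask & ~ndimage.binary_erosion(mask)

    bp, bt = boundary(pred), boundary(truth)
    d_to_truth = ndimage.distance_transform_edt(~bt, sampling=spacing)[bp]
    d_to_pred = ndimage.distance_transform_edt(~bp, sampling=spacing)[bt]
    return float(np.percentile(np.concatenate([d_to_truth, d_to_pred]), 95))
```

Unlike the ASSD, which averages all surface distances, the 95th percentile discards the largest 5% of distances, making the measure less sensitive to isolated outlier voxels.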
7. Type of Ground Truth Used
For Dose Accuracy Bench Test (Synthetic CT)
- Type of Ground Truth: Dosimetric agreement was evaluated by comparing the dose calculated from synthetic CT (sCT) with dose calculated from conventional CT (CT). In this context, the conventional CT images serve as the ground truth for accurate dose calculation and attenuation estimation.
For Auto Contouring Bench Test
- Type of Ground Truth: Expert consensus manual delineations. These refer to the manually generated contours of bladder, colon, and femoral heads by two expert truthers based on US clinical guidelines.
8. Sample Size for the Training Set
For Synthetic CT Generation
- Training Set Sample Size: 244 patients. This dataset comprised MR and CT images.
For Auto Contouring
- Training Set Sample Size: 175 patients. This dataset comprised MR images.
9. How Ground Truth for the Training Set was Established
For Synthetic CT Generation
- Ground Truth Establishment: The training data for synthetic CT generation comprised paired MR and CT images. The CT images inherently serve as the ground truth for tissue properties and attenuation values. These images were acquired in the treatment position at four hospitals.
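Because the planning CT serves as the attenuation reference, agreement between a synthetic CT and the real CT is often summarised voxel-wise in Hounsfield units. The sketch below computes a mean absolute HU error inside a body mask as a generic illustration of that idea; this metric is not reported in the submission, and all volumes in the example are synthetic placeholders.

```python
# Generic illustration of comparing a synthetic CT to the reference CT in
# Hounsfield units (mean absolute error inside a body mask). This metric is
# not reported in the submission; it only illustrates "CT as ground truth".
import numpy as np

def mae_hu(sct: np.ndarray, ct: np.ndarray, body_mask: np.ndarray) -> float:
    """Mean absolute HU difference over the voxels inside the body mask."""
    diff = np.abs(sct.astype(np.float64) - ct.astype(np.float64))
    return float(diff[body_mask.astype(bool)].mean())

# Hypothetical toy volumes: air (-1000 HU) outside, soft tissue (~40 HU) inside.
ct = np.full((32, 32, 32), -1000.0)
ct[8:24, 8:24, 8:24] = 40.0
sct = ct + np.random.default_rng(0).normal(0, 20, ct.shape)
body = ct > -500
print(f"MAE inside body: {mae_hu(sct, ct, body):.1f} HU")
```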
For Auto Contouring
- Ground Truth Establishment: The training data for auto-contouring included MR images together with "in-house generated expert manual contours." This indicates that expert(s) within Spectronic Medical manually delineated the structures (bladder, colon, femoral heads) on the MR images to create the ground truth for training the model. The text also states that the truthers involved in the bench test were involved in the development of the product and the generation of the training dataset. It is specifically mentioned that these expert manual delineations for the training set were generated at a different time than those used for the bench test.
(88 days)
ART-Plan is indicated for cancer patients for whom radiation treatment has been planned. It is intended to be used by trained medical professionals including, but not limited to, radiation oncologists, dosimetrists, and medical physicists.
ART-Plan is a software application intended to display and visualize 3D multi-modal medical image data. The user may import, define, display, transform and store DICOM 3.0 compliant datasets (including regions of interest structures). These images, contours and objects can subsequently be exported/distributed within the system, across computer networks and/or to radiation treatment planning systems. Supported modalities include CT, PET-CT, CBCT, 4D-CT and MR images.
ART-Plan supports AI-based contouring on CT and MR images and offers semi-automatic and manual tools for segmentation.
To help the user assess changes in image data and to obtain combined multi-modal image information, ART-Plan allows the registration of anatomical and functional images and display of fused and non-fused images to facilitate the comparison of patient image data by the user.
With ART-Plan, users are also able to generate, visualize, evaluate and modify pseudo-CT from MRI images.
The ART-Plan application is comprised of two key modules: SmartFuse and Annotate, allowing the user to display and visualize 3D multi-modal medical image data. The user may process, render, review, store, display and distribute DICOM 3.0 compliant datasets within the system and/or across computer networks. Supported modalities cover static and gated CT (computerized tomography including CBCT and 4D-CT), PET (positron emission tomography) and MR (magnetic resonance).
Compared to ART-Plan v1.6.1 (primary predicate), the following additional features have been added to ART-Plan v1.10.0:
- an improved version of the existing automatic segmentation tool
- automatic segmentation on more anatomies and organs-at-risk
- image registration on 4D-CT and CBCT images
- automatic segmentation on MR images
- generation of synthetic CT from MR images
- a cloud-based deployment
The ART-Plan technical functionalities claimed by TheraPanacea are the following:
- Proposing automatic solutions to the user, such as automatic delineation and automatic multimodal image fusion, to improve standardization of processes and performance and to reduce tedious, time-consuming user involvement.
- Offering the user a set of tools for semi-automatic delineation and semi-automatic registration, for manually modifying/editing automatically generated structures, adding or removing new/undesired structures, or imposing user-provided correspondence constraints on the fusion of multimodal images.
- Presenting to the user a set of visualization methods for the delineated structures and registration fusion maps.
- Saving the delineated structures and fusion results for use in the dosimetry process.
- Enabling rigid and deformable registration of patient image sets to combine information contained in the same or different modalities.
- Allowing users to generate, visualize, evaluate and modify pseudo-CT from MRI images.
ART-Plan offers deep-learning-based automatic segmentation for the following localizations:
- head and neck (on CT images)
- thorax/breast (for male/female, on CT images)
- abdomen (on CT images and MR images)
- pelvis male (on CT images and MR images)
- pelvis female (on CT images)
- brain (on CT images and MR images)
ART-Plan offers deep-learning-based synthetic CT generation from MR images for the following localizations:
- pelvis male
- brain
Here's a summary of the acceptance criteria and study details for the ART-Plan device, extracting information from the provided text:
Acceptance Criteria and Device Performance
| Criterion Category | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Auto-segmentation - Dice Similarity Coefficient (DSC) | DSC (mean) ≥ 0.8 (AAPM standard), OR DSC (mean) ≥ 0.54, OR DSC (mean) ≥ mean(inter-expert DSC) + 5% (inter-expert variability) | Multiple tests passed, demonstrating acceptable contours: exceeding AAPM standards in some cases (e.g., Abdo MRI auto-segmentation) and meeting or exceeding inter-expert variability for others (e.g., Brain MR, Pelvis MRI). For Brain MRI, some organs initially did not meet 0.8 but passed after further improvements and re-evaluation against inter-expert variability. All organs for all anatomies met at least one acceptance criterion. |
| Auto-segmentation - Qualitative Evaluation | Clinicians' qualitative evaluation of auto-segmentation is considered acceptable for clinical use without modifications (A) or with minor modifications/corrections (B), with A+B % ≥ 85%. | For all tested organs and anatomies, the qualitative evaluation resulted in A+B % ≥ 85%, indicating that clinicians found the contours acceptable for clinical use with minor or no modifications (e.g., the Pelvis Truefisp model and the H&N lymph nodes both met the ≥ 85% A or B threshold). |
| Synthetic-CT Generation | Median 2%/2mm gamma passing rate ≥ 95%, OR median 3%/3mm gamma passing rate ≥ 99.0%, OR mean dose deviation (pseudo-CT compared to standard CT) ≤ 2% in ≥ 88% of patients. | For both pelvis and brain synthetic CT, performance met these acceptance criteria and demonstrated non-inferiority to previously cleared devices. |
| Fusion Performance | Not explicitly stated with numerical thresholds; evaluated qualitatively. | Both rigid and deformable fusion algorithms provided clinically acceptable results for major clinical use cases in radiotherapy workflows, receiving "Passed" in all relevant studies. |
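The DSC rule above is an OR over three conditions, and the qualitative rule is a simple A+B proportion threshold. The sketch below encodes both checks as described in this summary; the grade labels beyond A and B, and all example numbers, are assumptions rather than figures from the submission.

```python
# Sketch of ART-Plan-style acceptance checks as described in this summary:
# an organ's mean DSC passes if ANY of the three conditions holds, and the
# qualitative review passes if grades A+B cover at least 85% of cases.
# All numbers and grade labels other than A/B are placeholders.

def dsc_passes(mean_dsc: float, inter_expert_mean_dsc: float) -> bool:
    return (mean_dsc >= 0.80                              # AAPM standard
            or mean_dsc >= 0.54                           # absolute floor
            or mean_dsc >= inter_expert_mean_dsc + 0.05)  # inter-expert + 5%

def qualitative_passes(grades: list[str]) -> bool:
    """'A' = usable without edits, 'B' = minor edits; other grades count against."""
    ab = sum(g in ("A", "B") for g in grades)
    return ab / len(grades) >= 0.85

# Hypothetical organ: mean DSC 0.76 against an inter-expert mean of 0.70 passes
# via the 0.54 floor and the inter-expert margin, though not the 0.8 AAPM bar.
print(dsc_passes(mean_dsc=0.76, inter_expert_mean_dsc=0.70))
print(qualitative_passes(["A", "A", "B", "B", "C", "A", "B", "A", "A", "B"]))
```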
Study Details
Sample Size used for the test set and the data provenance:
- Test Set Sample Size: The exact number of patients in the test set is not given as a single figure. The document states that, for structures of a given anatomy and modality, two non-overlapping datasets were separated (test patients and training data) and that the number of test patients was "selected based on thorough literature review and statistical power."
- Data Provenance: Real-world retrospective data, initially used for treatment of cancer patients. Pseudo-anonymized by the centers providing data before transfer. Data was sourced from both non-US and US populations.
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Varies. For some tests (e.g., Abdo MRI, Brain MRI, and Pelvis MRI auto-segmentation), at least 3 different experts were involved in the inter-expert variability calculations. For the qualitative evaluations, the document implies that multiple clinicians or medical physicists participated.
- Qualifications of Experts: Clinical experts, medical physicists (for validation of usability and performance tests) with expertise level comparable to a junior US medical physicist and responsibilities in the radiotherapy clinical workflow.
Adjudication method for the test set:
- The document describes a "truthing process [that] includes a mix of data created by different delineators (clinical experts) and assessment of intervariability, ground truth contours provided by the centers and validated by a second expert of the center, and qualitative evaluation and validation of the contours." This suggests a multi-reader approach, potentially with consensus or an adjudicator for ground truth, but a specific "2+1" or "3+1" method is not detailed. The "inter-expert variability" calculation implies direct comparison between multiple experts' delineations of the same cases.
If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- A direct MRMC comparative effectiveness study with human readers improving with AI vs without AI assistance is not explicitly described in the provided text. The studies focus on the standalone performance of the AI algorithm against established criteria (AAPM, inter-expert variability, qualitative acceptance) and non-inferiority to other cleared devices.
If a standalone (i.e., algorithm only, without human-in-the-loop) performance study was done:
- Yes, a standalone performance evaluation of the algorithm was done. The acceptance criteria and performance data are entirely based on the algorithm's output (e.g., DSC, gamma passing criteria, dose deviation) compared to ground truth or existing standards, and qualitative assessment by experts of the algorithm's generated contours.
The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The ground truth used primarily involved:
- Expert Consensus/Delineation: Contours created by different clinical experts and assessed for inter-variability.
- Validated Ground Truth Contours: Contours provided by the centers and validated by a second expert from the same center.
- Qualitative Evaluation: Clinical review and validation of contours.
- Dosimetric Measures: For synthetic-CT; comparison to standard CT dose calculations.
The sample size for the training set:
- Training Patients: 8,736 patients.
- Training Samples (Images/Anatomies/Structures): 299,142 samples. (One patient can have multiple images, and each image multiple delineated structures).
How the ground truth for the training set was established:
- "The contouring guidelines followed to produce the contours were confirmed with the centers which provided the data. Our truthing process includes a mix of data created by different delineators (clinical experts) and assessment of intervariability, ground truth contours provided by the centers and validated by a second expert of the center, and qualitative evaluation and validation of the contours." This indicates that the ground truth for the training set was established through a combination of expert delineation, internal validation by a second expert, adherence to established guidelines, and assessment of variability among experts.