Search Results
Found 4 results
510(k) Data Aggregation
(197 days)
AI-Rad Companion Organs RT
AI-Rad Companion Organs RT is a post-processing software intended to automatically contour DICOM CT and MR pre-defined structures using deep-learning-based algorithms.
Contours that are generated by AI-Rad Companion Organs RT may be used as input for clinical workflows including external beam radiation therapy treatment planning. AI-Rad Companion Organs RT must be used in conjunction with appropriate software such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept contours generated by AI-Rad Companion Organs RT.
The outputs of AI-Rad Companion Organs RT are intended to be used by trained medical professionals.
The software is not intended to automatically detect or contour lesions.
AI-Rad Companion Organs RT provides automatic segmentation of pre-defined structures such as Organs-at-risk (OAR) from CT or MR medical series, prior to dosimetry planning in radiation therapy. AI-Rad Companion Organs RT is not intended to be used as a standalone diagnostic device and is not a clinical decision-making software.
CT or MR series of images serve as input for AI-Rad Companion Organs RT and are acquired as part of a typical scanner acquisition. Once processed by the AI algorithms, the generated contours in DICOM-RTSTRUCT format are reviewed in a confirmation window, allowing the clinical user to confirm or reject the contours before sending them to the target system. Optionally, the user may choose to transfer the contours directly to a configurable DICOM node (e.g., the Treatment Planning System (TPS), which is the standard location for the planning of radiation therapy).
AI-Rad Companion Organs RT must be used in conjunction with appropriate software, such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept the automatically generated contours. The output of AI-Rad Companion Organs RT must be reviewed and, where necessary, edited with appropriate software before the generated contours are accepted as input to treatment planning steps. The output of AI-Rad Companion Organs RT is intended to be used by qualified medical professionals, who can perform complementary manual editing of the contours or add any new contours in the TPS (or any other interactive contouring application supporting DICOM-RT objects) as part of the routine clinical workflow.
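Since the contours are exchanged as standard DICOM RT Structure Set objects, any DICOM toolkit can inspect them before they reach the TPS. As a minimal, hypothetical illustration (not part of the cleared device; the file name is an assumption), the following Python sketch uses pydicom to list the ROI names and contour counts in an RTSTRUCT file:

```python
# Hypothetical sketch: inspect a DICOM RT Structure Set with pydicom (pip install pydicom).
# The file name is an assumption; any RTSTRUCT object exposes the same attributes.
import pydicom

ds = pydicom.dcmread("rtstruct_example.dcm")

# Map ROI numbers to names from the Structure Set ROI Sequence (3006,0020).
roi_names = {roi.ROINumber: roi.ROIName for roi in ds.StructureSetROISequence}

# Count contour items (typically one per image slice) for each ROI (3006,0039).
for roi_contour in ds.ROIContourSequence:
    name = roi_names.get(roi_contour.ReferencedROINumber, "unknown")
    n_contours = len(getattr(roi_contour, "ContourSequence", []))
    print(f"{name}: {n_contours} contour(s)")
```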
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Device Performance Study for AI-Rad Companion Organs RT
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for the AI-Rad Companion Organs RT device, particularly for the enhanced CT contouring algorithm, are based on comparing its performance to the predicate device and relevant literature/cleared devices. The primary metrics used are Dice coefficient and Absolute Symmetric Surface Distance (ASSD).
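For context, the Dice coefficient measures volumetric overlap between the automatic and reference segmentations (1.0 is perfect agreement), while ASSD averages the distances between the two contour surfaces in millimetres (lower is better). The sketch below shows one common way to compute both from binary 3-D masks with NumPy and SciPy; it is an illustration of the metrics only, not the manufacturer's validation code, and the voxel spacing is an assumed parameter.

```python
# Illustrative metric implementations (not the manufacturer's validation code).
import numpy as np
from scipy import ndimage

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

def assd(pred: np.ndarray, ref: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Average symmetric surface distance in mm (spacing = voxel size per axis)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    # Surface voxels: mask minus its morphological erosion.
    surf_pred = pred & ~ndimage.binary_erosion(pred)
    surf_ref = ref & ~ndimage.binary_erosion(ref)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dt_ref = ndimage.distance_transform_edt(~surf_ref, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~surf_pred, sampling=spacing)
    dists = np.concatenate([dt_ref[surf_pred], dt_pred[surf_ref]])
    return float(dists.mean())
```

The percentages reported in the tables below are simply these Dice values scaled to a 0–100 range.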
Table 3: Acceptance Criteria of AIRC Organs RT VA50
| Validation Testing Subject | Acceptance Criteria | Reported Device Performance (Summary) |
|---|---|---|
| Organs in Predicate Device | All organs segmented in the predicate device are also segmented in the subject device. | Confirmed. The device continued to segment all organs previously handled by the predicate. |
| | The average (AVG) Dice score difference between the subject and predicate device is … | |
(198 days)
AI-Rad Companion Organs RT
AI-Rad Companion Organs RT is a post-processing software intended to automatically contour DICOM CT and MR predefined structures using deep-learning-based algorithms.
Contours that are generated by AI-Rad Companion Organs RT may be used as input for clinical workflows including external beam radiation therapy treatment planning. AI-Rad Companion Organs RT must be used in conjunction with appropriate software such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept contours generated by AI-Rad Companion Organs RT.
The outputs of AI-Rad Companion Organs RT are intended to be used by trained medical professionals.
The software is not intended to automatically detect or contour lesions.
AI-Rad Companion Organs RT provides automatic segmentation of pre-defined structures such as Organs-at-risk (OAR) from CT or MR medical series, prior to dosimetry planning in radiation therapy. AI-Rad Companion Organs RT is not intended to be used as a standalone diagnostic device and is not a clinical decision-making software.
CT or MR series of images serve as input for AI-Rad Companion Organs RT and are acquired as part of a typical scanner acquisition. Once processed by the AI algorithms, the generated contours in DICOM-RTSTRUCT format are reviewed in a confirmation window, allowing the clinical user to confirm or reject the contours before sending them to the target system. Optionally, the user may choose to transfer the contours directly to a configurable DICOM node (e.g., the TPS, which is the standard location for the planning of radiation therapy).
The output of AI-Rad Companion Organs RT must be reviewed and, where necessary, edited with appropriate software before the generated contours are accepted as input to treatment planning steps. The output of AI-Rad Companion Organs RT is intended to be used by qualified medical professionals, who can perform complementary manual editing of the contours or add any new contours in the TPS (or any other interactive contouring application supporting DICOM-RT objects) as part of the routine clinical workflow.
Here's a summary of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) summary for AI-Rad Companion Organs RT:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria and reported performance are detailed for both MR and CT contouring algorithms.
MR Contouring Algorithm Performance
| Validation Testing Subject | Acceptance Criteria | Reported Device Performance (Average) |
|---|---|---|
| MR Contouring Organs | The average segmentation accuracy (Dice value) of all subject device organs should be equivalent or better than the overall segmentation accuracy of the predicate device. The overall fail rate for each organ/anatomical structure is smaller than 15%. | Dice: 85.75% (95% CI: [82.85, 87.58]); ASSD: 1.25 mm (95% CI: [0.95, 2.02]); Fail rate: 2.75% |
| (Compared to Reference Device MRCAT Pelvis (K182888)) | | AI-Rad Companion Organs RT VA50 – all organs: 86% (83-88); AI-Rad Companion Organs RT VA50 – common organs: 82% (78-84); MRCAT Pelvis (K182888) – all organs: 77% (75-79) |
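The 15% fail-rate criterion above is stated per organ, but the summary does not define what makes an individual case a "fail". Under one plausible reading (an assumption, not stated in the document), a case fails when its Dice score for that organ drops below a preset threshold, in which case the fail rate is simply the failing fraction of the test set:

```python
# Hedged sketch: fail rate as the fraction of cases below a Dice threshold.
# The 0.7 threshold is purely illustrative and is not taken from the 510(k) summary.
def fail_rate(dice_scores, threshold=0.7):
    scores = list(dice_scores)
    return 100.0 * sum(d < threshold for d in scores) / len(scores)

print(fail_rate([0.88, 0.91, 0.65, 0.84]))  # -> 25.0 (% of failing cases)
```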
CT Contouring Algorithm Performance
| Validation Testing Subject | Acceptance Criteria | Reported Device Performance (Average) |
|---|---|---|
| Organs in Predicate Device | All the organs segmented in the predicate device are also segmented in the subject device. The average (AVG) Dice score difference between the subject and predicate device is smaller than 3%. | The document states the subject device was "equivalent or had better performance than the predicate device", implicitly meeting this criterion, but does not give a specific numerical difference. |
| New Organs for Subject Device | A baseline value is defined by subtracting an error margin from the reference value (5% for Dice, 0.1 mm for ASSD); the subject device must have a higher value than this baseline in the selected reference metric. | Regional averages: Head & Neck: Dice 76.5%; Head & Neck lymph nodes: Dice 69.2%; Thorax: Dice 82.1%; Abdomen: Dice 88.3%; Pelvis: Dice 84.0% |
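Written out for the Dice case, where higher values are better and the direction of the comparison is unambiguous, the baseline-margin rule above amounts to the following (a paraphrase of the summary's wording, not a formula quoted from it; the corresponding ASSD direction is left implicit in the text):

```latex
% Baseline-margin acceptance rule for newly added organs (Dice case)
B_{\mathrm{Dice}} = D_{\mathrm{ref}} - 0.05,
\qquad \text{pass if } D_{\mathrm{subj}} > B_{\mathrm{Dice}}
```

For example, against a hypothetical reference Dice of 0.80 the baseline would be 0.75, which the reported Head & Neck regional average of 76.5% would clear.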
2. Sample Sizes Used for the Test Set and Data Provenance
- MR Contouring Algorithm Test Set:
- Sample Size: N = 66
- Data Provenance: Retrospective study, data from multiple clinical sites across North America & Europe. The document further breaks this down for different sequences:
- T1 Dixon W: 30 datasets (USA: 15, EU: 15)
- T2 W TSE: 36 datasets (USA: 25, EU: 11)
- Manufacturer: All Siemens Healthineers scanners.
- CT Contouring Algorithm Test Set:
- Sample Size: N = 414
- Data Provenance: Retrospective study, data from multiple clinical sites across North America, South America, Asia, Australia, and Europe. This dataset is distributed across three cohorts:
- Cohort A: 73 datasets (Germany: 14, Brazil: 59) - Siemens scanners only
- Cohort B: 40 datasets (Canada: 40) - GE: 18, Philips: 22 scanners
- Cohort C: 301 datasets (NA: 165, EU: 44, Asia: 33, SA: 19, Australia: 28, Unknown: 12) - Siemens: 53, GE: 59, Philips: 119, Varian: 44, Others: 26 scanners
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- The ground truth annotations were "drawn manually by a team of experienced annotators mentored by radiologists or radiation oncologists."
- "Additionally, a quality assessment including review and correction of each annotation was done by a board-certified radiation oncologist using validated medical image annotation tools."
- The exact number of individual annotators or experts is not specified beyond "a team" and "a board-certified radiation oncologist." Their specific experience level (e.g., "10 years of experience") is not given beyond "experienced" and "board-certified."
4. Adjudication Method for the Test Set
- The document implies a consensus/adjudication process: "a quality assessment including review and correction of each annotation was done by a board-certified radiation oncologist." This suggests that initial annotations by the "experienced annotators" were reviewed and potentially corrected by a higher-level expert. The specific number of reviewers for each case (e.g., 2+1, 3+1) is not explicitly stated, but it was at least a "team" providing initial annotations followed by a "board-certified radiation oncologist" for quality assessment/correction.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- No, the document does not describe a Multi-Reader Multi-Case (MRMC) comparative effectiveness study evaluating how much human readers improve with AI vs. without AI assistance. The validation studies focused on the standalone performance of the algorithm against expert-defined ground truth.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Yes, the performance validation described in section 10 ("Performance Software Validation") is a standalone (algorithm only) performance study. The metrics (Dice, ASSD, Fail Rate) compare the algorithm's output directly to the established ground truth. The device produces contours that must be reviewed and edited by trained medical professionals, but the validation tests the AI's direct output.
7. The Type of Ground Truth Used
- The ground truth used was expert consensus/manual annotation. It was established by "manual annotation" by "experienced annotators mentored by radiologists or radiation oncologists" and subsequently reviewed and corrected by a "board-certified radiation oncologist." Annotation protocols followed NRG/RTOG guidelines.
8. The Sample Size for the Training Set
- MR Contouring Algorithm Training Set:
- T1 VIBE/Dixon W: 219 datasets
- T2 W TSE: 225 datasets
- Prostate (T2W): 960 datasets
- CT Contouring Algorithm Training Set: The training dataset sizes vary per organ group:
- Cochlea: 215
- Thyroid: 293
- Constrictor Muscles: 335
- Chest Wall: 48
- LN Supraclavicular, Axilla Levels, Internal Mammaries: 228
- Duodenum, Bowels, Sigmoid: 332
- Stomach: 371
- Pancreas: 369
- Pulmonary Artery, Vena Cava, Trachea, Spinal Canal, Proximal Bronchus: 113
- Ventricles & Atriums: 706
- Descending Coronary Artery: 252
- Penile Bulb: 854
- Uterus: 381
9. How the Ground Truth for the Training Set Was Established
- For both training and validation data, the ground truth annotations were established using the "Standard Annotation Process." This involved:
- Annotation protocols defined following NRG/RTOG guidelines.
- Manual annotations drawn by a team of experienced annotators mentored by radiologists or radiation oncologists using an internal annotation tool.
- A quality assessment including review and correction of each annotation by a board-certified radiation oncologist using validated medical image annotation tools.
- The document explicitly states that the "training data used for the training of the algorithm is independent of the data used to test the algorithm."
(162 days)
AI-Rad Companion Organs RT
AI-Rad Companion Organs RT is a post-processing software intended to automatically contour DICOM CT imaging data using deep-learning-based algorithms.
Contours that are generated by AI-Rad Companion Organs RT may be used as input for clinical workflows including external beam radiation therapy treatment planning. AI-Rad Companion Organs RT must be used in conjunction with appropriate software such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept contours generated by AI-Rad Companion Organs RT.
The outputs of AI-Rad Companion Organs RT, in the format of RTSTRUCT objects, are intended to be used by trained medical professionals.
The software is not intended to automatically detect or contour lesions. Only DICOM images of adult patients are considered to be valid input.
AI-Rad Companion Organs RT is a post-processing software used to automatically contour DICOM CT imaging data using deep-learning-based algorithms. AI-Rad Companion Organs RT contouring workflow supports CT input data and produces RTSTRUCT outputs. The configuration of the organ database and organ templates defining the organs and structures to be contoured based on the input DICOM data is managed via a configuration interface. Contours that are generated by AI-Rad Companion Organs RT may be used as input for clinical workflows including external beam radiation therapy treatment planning.
The outputs of AI-Rad Companion Organs RT, in the form of RTSTRUCT objects, are intended to be used by trained medical professionals. The output of AI-Rad Companion Organs RT must be used in conjunction with appropriate software such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept contours generated by the AI-Rad Companion Organs RT application.
At a high level, AI-Rad Companion Organs RT includes the following functionality:
- Automated contouring of Organs at Risk (OAR) workflow
  a. Input: DICOM CT
  b. Output: DICOM RTSTRUCT
- Organ Templates configuration (incl. Organ Database)
- Web-based preview of contouring results to accept or reject the generated contours
Here's a breakdown of the acceptance criteria and study details for the AI-Rad Companion Organs RT device, based on the provided text:
1. Table of Acceptance Criteria & Reported Device Performance:
| Validation Testing Subject | Acceptance Criteria | Reported Device Performance (Median) |
|---|---|---|
| Organs in Predicate Device | 1. All organs segmented in the predicate device are also segmented in the subject device. | Met (all predicate organs are segmented in the subject device, implied by the comparison tables). |
| | 2. The lower bound of the 95th-percentile CI of the subject device segmentation is no more than 0.1 Dice below the mean of the predicate device segmentation. | DICE: Subject: 0.85 (CI: [80.23, 84.61]) vs. Predicate: 0.85 (implied CI close to median). ASSD: Subject: 0.93 (CI: [0.86, 1.14]) vs. Predicate: 0.94 (implied CI close to median). The statement that "performance of the subject device and predicate device are comparable in DICE and ASSD" implies this criterion was met. |
| Head & Neck Lymph Nodes | 1. The overall fail rate of each organ/anatomical structure is smaller than 15%. | Not explicitly stated for each organ/anatomical structure, but generally implied by acceptable DICE and ASSD. |
| | 2. The lower bound of the 95th-percentile CI of the subject device segmentation is no more than 0.1 Dice below the mean of the reference device segmentation. | DICE: Subject (Head and Neck lymph node class): Avg 81.32 (CI: [80.32, 82.12]) vs. Reference (Pelvic lymph node class): Avg 80. The statement that "performance of the subject device for non-overlapping organs is comparable in DICE to the reference device" and the specific values (80.32 is not more than 0.1 lower than 80; it is higher by 0.32) indicate this criterion was met. ASSD: Subject (Head and Neck lymph node class): Avg 1.06 (CI: [0.99, 1.19]) vs. Reference: N.A. (no direct ASSD comparison). |
Note: The text does not explicitly state the fail rate for the Head & Neck Lymph Nodes, only that it should be "smaller than 15%". The conclusion implies all acceptance criteria were met. The confidence intervals for the predicate device's DICE and ASSD are missing in Table 4, but the statement that "performance of the subject device and predicate device are comparable" suggests the acceptance criteria were satisfied.
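Written as a formula, the CI-based criterion that appears twice in the table above is essentially a non-inferiority check (a paraphrase of the summary's wording, not a quoted formula):

```latex
% Non-inferiority criterion on the Dice score
\mathrm{CI}^{95}_{\mathrm{lower}}(\text{subject}) \;>\; \overline{D}_{\text{comparator}} - 0.1
```

For the Head & Neck lymph-node class, the reported subject lower bound (80.32 on the 0–100 scale) already exceeds the comparator mean (80), so the criterion holds however the 0.1 Dice margin is scaled.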
2. Sample Size Used for the Test Set and Data Provenance:
- Sample Size: N = 113, a retrospective performance study on CT data.
  - This N = 113 is composed of:
    - Cohort A: 73 subjects (Germany: 14, Brazil: 59)
    - Cohort B: 40 subjects (Canada: 40)
- Data Provenance: Multiple clinical sites in North America (Canada), Europe (Germany), and South America (Brazil). The study used previously acquired CT data (retrospective).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
- Number of Experts: Not explicitly stated as a specific number. The text mentions "a team of experienced annotators" and "a board-certified radiation oncologist".
- Qualifications:
- Annotators: "experienced annotators mentored by radiologists or radiation oncologists".
- Review/Correction: "board-certified radiation oncologist".
4. Adjudication Method for the Test Set:
- The ground truth annotations were drawn manually by a team of experienced annotators and then underwent a quality assessment in which "review and correction of each annotation was done by a board-certified radiation oncologist". This suggests a method where initial annotations are created by multiple individuals and then reviewed/corrected by a single, highly qualified expert. This could be interpreted as a form of expert review/adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:
- No, a MRMC comparative effectiveness study was not explicitly stated as having been done. The performance evaluation focused on comparing the AI algorithm's output to expert-generated ground truth and comparing the device's performance to predicate/reference devices, not on how human readers improve with or without AI assistance.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Yes, a standalone performance study was done. The study "validated the AI-Rad Companion Organs RT software from clinical perspective" by evaluating its auto-contouring algorithm and calculating metrics like DICE coefficients and ASSD against ground truth annotations. The device's output "must be used in conjunction with appropriate software... to review, edit, and accept contours", so its output is subsequently reviewed by a human, but the validation itself assessed the algorithm's contour generation in a standalone fashion.
7. The Type of Ground Truth Used:
- Expert Consensus/Manual Annotation with Expert Review (following guidelines): "Ground truth annotations were established following RTOG and clinical guidelines using manual annotation. The ground truth annotations were drawn manually by a team of experienced annotators mentored by radiologists or radiation oncologists using an internal annotation tool. Additionally, a quality assessment including review and correction of each annotation was done by a board-certified radiation oncologist..." This indicates a robust expert-derived ground truth.
8. The Sample Size for the Training Set:
- 160 datasets (for Head & Neck specifically, other organs might have different training data, but this is the only training set sample size provided).
9. How the Ground Truth for the Training Set was Established:
- "In both the annotation process for the training and validation testing data, the annotation protocols for the OAR were defined following the NRG/RTOG guidelines. The ground truth annotations were drawn manually by a team of experienced annotators mentored by radiologists or radiation oncologists using an internal annotation tool. Additionally, a quality assessment including review and correction of each annotation was done by a board-certified radiation oncologist using validated medical image annotation tools."
- This is the same process as for the test set, ensuring consistency in ground truth establishment.
(319 days)
AI-Rad Companion Organs RT
AI-Rad Companion Organs RT is a post-processing software intended to automatically contour DICOM CT imaging data using deep-learning-based algorithms.
Contours that are generated by AI-Rad Companion Organs RT may be used as input for clinical workflows including external beam radiation therapy treatment planning. AI-Rad Companion Organs RT must be used in conjunction with appropriate software such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept contours generated by AI-Rad Companion Organs RT.
The outputs of AI-Rad Companion Organs RT, in the format of RTSTRUCT objects, are intended to be used by trained medical professionals.
The software is not intended to automatically detect or contour lesions. Only DICOM images of adult patients are considered to be valid input.
AI-Rad Companion Organs RT is a post-processing software used to automatically contour DICOM CT imaging data using deep-learning-based algorithms. AI-Rad Companion Organs RT contouring workflow supports CT input data and produces RTSTRUCT outputs. The configuration of the organ database and organ templates defining the organs and structures to be contoured based on the input DICOM data is managed via a configuration interface. Contours that are generated by AI-Rad Companion Organs RT may be used as input for clinical workflows including external beam radiation therapy treatment planning.
The outputs of AI-Rad Companion Organs RT, in the form of RTSTRUCT objects, are intended to be used by trained medical professionals. The output of AI-Rad Companion Organs RT must be used in conjunction with appropriate software such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept contours generated by the AI-Rad Companion Organs RT application.
At a high level, AI-Rad Companion Organs RT includes the following functionality:
- Automated contouring of Organs at Risk (OAR) workflow
  a. Input: DICOM CT
  b. Output: DICOM RTSTRUCT
- Organ Templates configuration (incl. Organ Database)
- Web-based preview of contouring results to accept or reject the generated contours
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for the AI-Rad Companion Organs RT are implicitly tied to demonstrating performance comparable to the predicate device, AccuContour, specifically in terms of contouring accuracy.
Metric | Acceptance Criteria (based on AccuContour) | Reported Device Performance (AI-Rad Companion Organs RT VA20) |
---|---|---|
DICE Coefficient | 0.85 – 0.95 | MED: 0.85 |
95% Hausdorff Distance | ≤ 3.5 mm | MED: 2.0 mm |
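The 95% Hausdorff distance reported above is commonly computed as the 95th percentile of the symmetric surface-to-surface distances, which makes it robust to a handful of outlier surface points. A minimal sketch under the same assumptions as the earlier Dice/ASSD example (binary 3-D masks, known voxel spacing; illustrative only, not the study's code):

```python
# Illustrative 95% Hausdorff distance (HD95), in mm.
import numpy as np
from scipy import ndimage

def hd95(pred: np.ndarray, ref: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th percentile of symmetric surface distances between two masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    surf_pred = pred & ~ndimage.binary_erosion(pred)
    surf_ref = ref & ~ndimage.binary_erosion(ref)
    dt_ref = ndimage.distance_transform_edt(~surf_ref, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~surf_pred, sampling=spacing)
    dists = np.concatenate([dt_ref[surf_pred], dt_pred[surf_ref]])
    return float(np.percentile(dists, 95))
```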
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 113 cases
- Data Provenance: Retrospective CT data previously acquired for RT treatment planning from multiple clinical sites across North America and Europe. The subcohort analysis also included CT data from multiple vendors (GE, Siemens, Philips).
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document states: "Ground truth annotations were established following RTOG and clinical guidelines using manual annotation." It does not specify the number of experts or their specific qualifications (e.g., "radiologist with 10 years of experience"). However, the phrase "following RTOG and clinical guidelines using manual annotation" implies establishment by qualified medical professionals experienced in radiation therapy contouring.
4. Adjudication Method for the Test Set
The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1). It only states that ground truth annotations were established via "manual annotation" following guidelines. This suggests a single expert or a consensus process without a detailed breakdown of the adjudication procedure in the provided text.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not reported as being done in this document. The study focuses on the standalone performance of the AI algorithm against a manual ground truth and a comparison to a predicate device's reported performance, not on how human readers' performance improves with or without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done
Yes, a standalone performance study of the algorithm was done. The reported DICE coefficient and Hausdorff Distance values directly assess the algorithm's output against the ground truth without human intervention in the contouring process itself. The "output of AI-Rad Companion Organs RT... are intended to be used by trained medical professionals" who "review, edit, and accept contours generated by AI-Rad Companion Organs RT," but the performance metrics provided are for the initial automated contouring.
7. The Type of Ground Truth Used
The ground truth used was expert consensus / manual annotation following RTOG and clinical guidelines.
8. The Sample Size for the Training Set
The document does not specify the sample size for the training set. It only discusses the validation set (113 cases).
9. How the Ground Truth for the Training Set Was Established
The document does not specify how the ground truth for the training set was established. It only describes the ground truth establishment for the test/validation set: "Ground truth annotations were established following RTOG and clinical guidelines using manual annotation."