K Number: K260509
Manufacturer: Radformation, Inc.
Date Cleared: 2026-03-19
Product Code: QKB
Regulation Number: 892.2050
Age Range: 22 - 120
Predicate For: N/A
Intended Use

AutoContour is intended to assist radiation treatment planners in contouring and reviewing structures within medical images in preparation for radiation therapy treatment planning.

Device Description

As with AutoContour Model RADAC V4, the AutoContour Model RADAC V5 device is software that uses DICOM-compliant image data (CT or MR) as input to: (1) automatically contour various structures of interest for radiation therapy treatment planning using machine-learning-based contouring; (2) allow the user to review and modify the resulting contours; and (3) generate DICOM-compliant structure set data that can be imported into a radiation therapy treatment planning system. The deep-learning-based structure models are trained on imaging datasets covering anatomical organs of the head and neck, thorax, abdomen, and pelvis for adult male and female patients.

AutoContour Model RADAC V5 consists of 3 main components:

  1. A .NET client application designed to run on the Windows Operating System, allowing the user to load image and structure sets for upload to the cloud-based server for automatic contouring, perform registration with other image sets, as well as review, edit, and export the structure set.
  2. A local "agent" service designed to run on the Windows Operating System that is configured by the user to monitor a network storage location for new CT and MR datasets that are to be automatically contoured.
  3. A cloud-based automatic contouring service that produces initial contours based on image sets sent by the user from the .NET client application.
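The second component's behavior (monitoring a network folder for new datasets to contour) can be sketched as a simple polling loop. This is a hypothetical illustration only; the function and callback names are not Radformation's actual API.

```python
import os
import time

# Hypothetical sketch of the local "agent" service's core loop: poll a
# configured folder for new DICOM files and hand each one off for
# automatic contouring. The callback name and polling strategy are
# illustrative assumptions, not the device's actual implementation.
def watch_for_new_datasets(watch_dir, submit, poll_seconds=30, max_polls=None):
    seen = set()
    polls = 0
    while max_polls is None or polls < max_polls:
        # Only DICOM files are of interest; other files are ignored.
        current = {f for f in os.listdir(watch_dir) if f.lower().endswith(".dcm")}
        for name in sorted(current - seen):
            submit(os.path.join(watch_dir, name))  # e.g. upload for contouring
        seen |= current
        polls += 1
        if max_polls is None or polls < max_polls:
            time.sleep(poll_seconds)
```

In a real deployment this loop would run as a Windows service and group files by DICOM series rather than handling them individually.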
AI/ML Overview

Here's a structured summary of the acceptance criteria and study details for the AutoContour Model RADAC V5, based on the provided FDA 510(k) clearance letter:


Acceptance Criteria and Device Performance Study for AutoContour Model RADAC V5

1. Table of Acceptance Criteria and Reported Device Performance

The acceptance criteria for each structure model varied based on its size (Large, Medium, Small) and whether it was a new model, an updated model, or an unchanged existing model. The performance was primarily evaluated through Dice Similarity Coefficient (DSC) and Likert Qualitative Review for new/updated models, and DSC and Hausdorff Distance for existing models.

| Metric Type | Acceptance Criteria (Large / Medium / Small Structures) | Reported CT Training Performance (Mean DSC ± Std Dev) | Reported MR Training Performance (Mean DSC ± Std Dev) | Reported CT External Reviewer Performance (Mean DSC) | Reported MR External Reviewer Performance (Mean DSC) | Reported External Reviewer Qualitative Performance (Average Rating) |
| --- | --- | --- | --- | --- | --- | --- |
| DSC Evaluation (Training/External Dataset) | Large: ≥ 0.80; Medium: ≥ 0.65; Small: ≥ 0.50 | Large: 0.91 ± 0.14; Medium: 0.86 ± 0.13; Small: 0.75 ± 0.20 | Medium: 0.82 ± 0.12; Small: 0.72 ± 0.09 | Large: 0.94 (A_Aorta); Medium: 0.91 (A_Aorta_Asc); Small: 0.78 (A_Celiac) | Medium: 0.93 (Brainstem); Small: 0.81 (NVB_L) | N/A |
| Likert Qualitative Review (Internal/External) | Average grade ≥ 3 across all external image sets | N/A | N/A | N/A | N/A | 4.3 (across all MR models); 4.8 (e.g., A_Aorta); min. 3.9 (HDR_Bowel, the single structure failing DSC) |
| Existing Structure Model DSC Comparison | Large: > 0.99; Medium: > 0.98; Small: > 0.95 | (Compared new version to previous, not absolute values) | (Compared new version to previous, not absolute values) | N/A | N/A | N/A |
| Existing Structure Model Hausdorff Distance | ≤ 3 mm | (Compared new version to previous, not absolute values) | (Compared new version to previous, not absolute values) | N/A | N/A | N/A |

Note: The document provides specific DSC values for many individual structures. The table above shows aggregated or illustrative examples from the tables provided.
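The DSC metric in the table above, and the size-dependent pass thresholds, can be sketched as follows. The masks here are toy binary arrays, not device data.

```python
import numpy as np

# Dice Similarity Coefficient (DSC) between a predicted and a ground-truth
# binary mask: 2|A ∩ B| / (|A| + |B|). This is the overlap metric the
# summary tables report.
def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:  # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Size-dependent mean-DSC pass thresholds from the acceptance-criteria table.
DSC_THRESHOLDS = {"Large": 0.80, "Medium": 0.65, "Small": 0.50}

def passes_dsc(mean_dsc: float, size: str) -> bool:
    return mean_dsc >= DSC_THRESHOLDS[size]
```

The size stratification exists because DSC penalizes boundary error more heavily for small structures, so a single threshold would be unfair to structures like the cornea or retina.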

2. Sample Size for Test Set and Data Provenance

  • CT Test Sets: An average of 49 testing image sets per CT structure model (approximately 10% of training data). Specific examples include:
    • A_Aorta_Asc (Update): 60 testing sets
    • A_Carotid_L/R (Update): 83 testing sets
    • A_Celiac: 44 testing sets
  • MR Test Sets:
    • Brain models: an average of 58 testing image sets per model (e.g., Amygdala_L/R: 133; CorpusCallosum: 15)
    • Pelvis models: an average of 50 testing image sets per model (e.g., Rectal_Spacer: 26)
  • External Clinical Test Sets:
    • CT: 20 (A_Aorta), 37 (A_Carotid_L/R), 24 (A_Celiac), etc.
    • MR: 20 (Amygdala_L), 45 (Bladder_Trigone), 7 (HDR_Bowel), etc.
  • Data Provenance (Training and Testing): Data was gathered from several institutions in several different countries (not specifically enumerated but mentioned for CT and MR). Specific external clinical datasets for CT included TCIA - Pelvic-Ref, TCIA - Head-Neck-PET-CT, TCIA - Pancreas-CT-CB, TCIA - NSCLC data. MR external datasets included "MR - Renown," "Gold Atlas Pelvis," "SynthRad," "MRLinac Pelvis," "Female HDR MR Pelvis," and "MR Pelvis Barrigel," some of which were open-source or shared by clinical partners/institutions in Canada, Spain, Australia, and the United States. The images used for testing were sequestered from the original training and validation data population and removed from the training dataset pool before model training began.
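The sequestering step described above can be sketched as a deterministic holdout split performed before any training. The hashing approach and names here are illustrative assumptions, not the submission's actual procedure.

```python
import hashlib

# Deterministically sequester ~10% of image sets as a held-out test pool
# before model training begins. Hashing a stable case identifier (rather
# than random sampling) keeps the split reproducible across runs.
def split_sequestered(case_ids, holdout_fraction=0.10):
    train, test = [], []
    for cid in case_ids:
        # Map each ID to a stable bucket 0-99; low buckets go to the test pool.
        bucket = int(hashlib.sha256(cid.encode()).hexdigest(), 16) % 100
        (test if bucket < holdout_fraction * 100 else train).append(cid)
    return train, test
```

The key property, matching the description in the clearance summary, is that the test cases are removed from the training pool up front and never seen during training or validation.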

3. Number of Experts and Qualifications for Ground Truth

  • Number of Experts: Six (6) clinically experienced experts.
  • Qualifications: 2 radiation therapy physicists, 1 radiation dosimetrist, and 3 radiation therapists with specialized training in radiation therapy contouring.

4. Adjudication Method for the Test Set

The ground truthing of each test dataset was generated manually using consensus (NRG/RTOG/ESTRO) guidelines as appropriate. While a specific (e.g., 2+1, 3+1) adjudication method for individual cases or disagreements is not explicitly stated, the use of "consensus" guidelines by multiple experts implies a form of adjudicated agreement for final ground truth.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

  • The document does not explicitly describe a conventional MRMC comparative effectiveness study comparing human readers with AI assistance versus without AI assistance.
  • Instead, it measures the AI's standalone performance against expert-generated ground truth and uses a qualitative review by external experts (average rating 1-5 where >3 means beneficial, 5 means no edits needed) to assess the clinical appropriateness and required modifications for the AI-generated contours. This qualitative review serves as an indirect assessment of human interaction with AI output, but not a formal MRMC study as typically defined for reader performance improvement with assistance.
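The qualitative pass rule described above reduces to a one-line check: each AI-generated contour receives a 1-5 Likert grade (>3 meaning beneficial, 5 meaning no edits needed), and a structure model passes when its average grade across reviewed image sets exceeds 3.

```python
# Minimal sketch of the Likert pass rule from the qualitative review.
def likert_passes(grades: list) -> bool:
    return sum(grades) / len(grades) > 3
```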

6. Standalone Performance (Algorithm Only without Human-in-the-loop)

  • Yes, standalone performance was done. The primary performance metrics (Dice Similarity Coefficient - DSC and Hausdorff Distance) directly evaluate the algorithm's output against the expert-generated ground truth without human intervention in the contour generation process. The "Training DSC Evaluation" and "External Dataset DSC Evaluation" explicitly refer to the model's direct output.
  • The qualitative review by external experts, while involving human assessment, is done after the algorithm has generated its standalone contours, effectively evaluating the standalone output's clinical utility.

7. Type of Ground Truth Used

  • Expert Consensus: Ground truth for both training and test sets was established manually by six clinically experienced experts following consensus guidelines (NRG/RTOG/ESTRO).

8. Sample Size for the Training Set

  • CT Training Sets: An average of 459 training image sets per CT structure model. Specific examples:
    • A_Aorta_Asc (Update): 240
    • A_Carotid_L/R (Update): 328
    • A_Celiac: 435
  • MR Training Sets:
    • Brain models: An average of 259 training image sets.
    • Pelvis models: An average of 243 training image sets.
    • Specific examples: Amygdala_L/R: 493, CorpusCallosum: 56, Rectal_Spacer: 233.

9. How Ground Truth for Training Set was Established

  • The ground truth for the training set was established manually by the same group of six clinically experienced experts (2 radiation therapy physicists, 1 radiation dosimetrist, and 3 radiation therapists with specialized training in radiation therapy contouring) using consensus guidelines (NRG/RTOG/ESTRO).

U.S. Food & Drug Administration 510(k) Clearance Letter

Page 1

U.S. Food & Drug Administration
10903 New Hampshire Avenue
Silver Spring, MD 20993
www.fda.gov

Doc ID # 04017.08.04

March 19, 2026

Radformation, Inc.
Jennifer Wampler
Senior Regulatory Affairs Specialist
261 Madison Ave.
9th Floor
New York, New York 10016

Re: K260509
Trade/Device Name: AutoContour Model RADAC V5
Regulation Number: 21 CFR 892.2050
Regulation Name: Medical Image Management And Processing System
Regulatory Class: Class II
Product Code: QKB
Dated: February 13, 2026
Received: February 17, 2026

Dear Jennifer Wampler:

We have reviewed your section 510(k) premarket notification of intent to market the device referenced above and have determined the device is substantially equivalent (for the indications for use stated in the enclosure) to legally marketed predicate devices marketed in interstate commerce prior to May 28, 1976, the enactment date of the Medical Device Amendments, or to devices that have been reclassified in accordance with the provisions of the Federal Food, Drug, and Cosmetic Act (the Act) that do not require approval of a premarket approval application (PMA). You may, therefore, market the device, subject to the general controls provisions of the Act. Although this letter refers to your product as a device, please be aware that some cleared products may instead be combination products. The 510(k) Premarket Notification Database available at https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm identifies combination product submissions. The general controls provisions of the Act include requirements for annual registration, listing of devices, good manufacturing practice, labeling, and prohibitions against misbranding and adulteration. Please note: CDRH does not evaluate information related to contract liability warranties. We remind you, however, that device labeling must be truthful and not misleading.

If your device is classified (see above) into either class II (Special Controls) or class III (PMA), it may be subject to additional controls. Existing major regulations affecting your device can be found in the Code of Federal Regulations, Title 21, Parts 800 to 898. In addition, FDA may publish further announcements concerning your device in the Federal Register.

Page 2


Additional information about changes that may require a new premarket notification are provided in the FDA guidance documents entitled "Deciding When to Submit a 510(k) for a Change to an Existing Device" (https://www.fda.gov/media/99812/download) and "Deciding When to Submit a 510(k) for a Software Change to an Existing Device" (https://www.fda.gov/media/99785/download).

Your device is also subject to, among other requirements, the Quality Management System Regulation (QMSR) (21 CFR Part 820), which includes, but is not limited to, ISO 13485 clause 7.3 (Design controls), ISO 13485 clause 8.3 (Nonconforming product), ISO 13485 clause 8.5.2 (Corrective action), and ISO 13485 clause 8.5.3 (Preventive action). Please note that regardless of whether a change requires premarket review, the QMSR requires device manufacturers to review and approve changes to device design and production (ISO 13485 clause 7.3 and ISO 13485 clause 7.5) and document changes and approvals in the Medical Device File (ISO 13485 clause 4.2.3).

Please be advised that FDA's issuance of a substantial equivalence determination does not mean that FDA has made a determination that your device complies with other requirements of the Act or any Federal statutes and regulations administered by other Federal agencies. You must comply with all the Act's requirements, including, but not limited to: registration and listing (21 CFR Part 807); labeling (21 CFR Part 801); medical device reporting (reporting of medical device-related adverse events) (21 CFR Part 803) for devices or postmarketing safety reporting (21 CFR Part 4, Subpart B) for combination products (see https://www.fda.gov/combination-products/guidance-regulatory-information/postmarketing-safety-reporting-combination-products); good manufacturing practice requirements as set forth in the Quality Management System Regulation (QMSR) (21 CFR Part 820) for devices or current good manufacturing practices (21 CFR Part 4, Subpart A) for combination products; and, if applicable, the electronic product radiation control provisions (Sections 531-542 of the Act); 21 CFR Parts 1000-1050.

All medical devices, including Class I and unclassified devices and combination product device constituent parts are required to be in compliance with the final Unique Device Identification System rule ("UDI Rule"). The UDI Rule requires, among other things, that a device bear a unique device identifier (UDI) on its label and package (21 CFR 801.20(a)) unless an exception or alternative applies (21 CFR 801.20(b)) and that the dates on the device label be formatted in accordance with 21 CFR 801.18. The UDI Rule (21 CFR 830.300(a) and 830.320(b)) also requires that certain information be submitted to the Global Unique Device Identification Database (GUDID) (21 CFR Part 830 Subpart E). For additional information on these requirements, please see the UDI System webpage at https://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance/unique-device-identification-system-udi-system.

Also, please note the regulation entitled, "Misbranding by reference to premarket notification" (21 CFR 807.97). For questions regarding the reporting of adverse events under the MDR regulation (21 CFR Part 803), please go to https://www.fda.gov/medical-devices/medical-device-safety/medical-device-reporting-mdr-how-report-medical-device-problems.

For comprehensive regulatory information about medical devices and radiation-emitting products, including information about labeling regulations, please see Device Advice (https://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance) and CDRH Learn (https://www.fda.gov/training-and-continuing-education/cdrh-learn). Additionally, you may contact the Division of Industry and Consumer Education (DICE) to ask a question about a specific regulatory topic. See the DICE website (https://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance/contact-us-division-industry-and-consumer-education-dice) for more information or contact DICE by email (DICE@fda.hhs.gov) or phone (1-800-638-2041 or 301-796-7100).

Page 3

Sincerely,

Lora D. Weidner, Ph.D.
Assistant Director
Radiation Therapy Team
DHT8C: Division of Radiological
Imaging and Radiation Therapy Devices
OHT8: Office of Radiological Health
Office of Product Evaluation and Quality
Center for Devices and Radiological Health

Enclosure

Page 4

Indications for Use

Please type in the marketing application/submission number, if it is known. This textbox will be left blank for original applications/submissions.

K260509

Please provide the device trade name(s).

AutoContour Model RADAC V5

Please provide your Indications for Use below.

AutoContour is intended to assist radiation treatment planners in contouring and reviewing structures within medical images in preparation for radiation therapy treatment planning.

Please select the types of uses (select one or both, as applicable).
☑ Prescription Use (21 CFR 801 Subpart D)
☐ Over-The-Counter Use (21 CFR 801 Subpart C)

Please select the age group(s) for which the device(s) is to be used.
☐ Neonates/Newborns (Birth to < 29 days old)
☐ Infants (29 days old to < 2 years old)
☐ Children (2 years old to < 12 years old)
☐ Adolescents (12 years old to < 22 years old)
☑ Adults (22 years old and greater)

Page 5

510(k) Summary - AutoContour Model RADAC V5

This 510(k) Summary has been created per the requirements of the Safe Medical Device Act (SMDA) of 1990, and the content is provided in conformance with 21 CFR Part 807.92.

1. Submitter's Information

Table 1: Submitter's Information

| Field | Information |
| --- | --- |
| Submitter's Name | Kevin Robinson |
| Company | Radformation, Inc. |
| Address | 261 Madison Avenue, 9th Floor, New York, NY 10016 |
| Contact Person | Kevin Robinson, VP of Regulatory Affairs, Radformation |
| Phone | 585-500-6996 |
| Fax | |
| Email | regulatory@radformation.com |
| Date of Summary Preparation | 2/13/2026 |

2. Device Information

Table 2: Device Information

| Field | Information |
| --- | --- |
| Trade Name | AutoContour Model RADAC V5 |
| Common Name | AutoContour, AutoContouring, AutoContour Agent, AutoContour Cloud Server |
| Classification Name | Medical image management and processing system |
| Classification | Class II |
| Regulation Number | 892.2050 |
| Product Code | QKB |
| Classification Panel | Radiology |

Page 6

3. Predicate Device Information

Predicate Device

AutoContour Model RADAC V5 (Subject Device) uses its prior submission, AutoContour Model RADAC V4 (K242729), as the Predicate Device. The Indications for Use, patient population, functionality, and technical components of this Predicate Device remain unchanged in AutoContour Model RADAC V5. This submission is intended to build on the functionality and technological components of the 510(k)-cleared AutoContour Model RADAC V4.

Reference Device

AutoContour Model RADAC V5 also makes use of Limbus Contour (K241837) as a reference device for the new structure models from Limbus Contour that have been integrated into this version. Limbus Contour is a Radformation product.

4. Device Description

As with AutoContour Model RADAC V4, the AutoContour Model RADAC V5 device is software that uses DICOM-compliant image data (CT or MR) as input to: (1) automatically contour various structures of interest for radiation therapy treatment planning using machine-learning-based contouring; (2) allow the user to review and modify the resulting contours; and (3) generate DICOM-compliant structure set data that can be imported into a radiation therapy treatment planning system. The deep-learning-based structure models are trained on imaging datasets covering anatomical organs of the head and neck, thorax, abdomen, and pelvis for adult male and female patients.

AutoContour Model RADAC V5 consists of 3 main components:

  1. A .NET client application designed to run on the Windows Operating System, allowing the user to load image and structure sets for upload to the cloud-based server for automatic contouring, perform registration with other image sets, as well as review, edit, and export the structure set.

  2. A local "agent" service designed to run on the Windows Operating System that is configured by the user to monitor a network storage location for new CT and MR datasets that are to be automatically contoured.

  3. A cloud-based automatic contouring service that produces initial contours based on image sets sent by the user from the .NET client application.

5. Indications for Use

AutoContour is intended to assist radiation treatment planners in contouring and reviewing structures within medical images in preparation for radiation therapy treatment planning.

Page 7

6. Technological Characteristics

AutoContour RADAC V5 (Subject Device) uses its prior submission, AutoContour RADAC V4 (K242729), as the Predicate Device. The Indications for Use, patient population, functionality, and technical components of this Predicate Device remain unchanged in AutoContour RADAC V5. The main UI outputs are equivalent to those of the Predicate Device as well, allowing the user to properly visualize and analyze the calculations. This submission is intended to build on the functionality and technological components of the 510(k)-cleared AutoContour RADAC V4.

Table 11: Substantial Equivalence AutoContour Model RADAC V5 vs. AutoContour Model RADAC V4 (K242729)

AutoContour vs. Predicate Devices: Technological Characteristics

| Characteristic | Subject Device: AutoContour Model RADAC V5 | Predicate Device: AutoContour Model RADAC V4 (K242729) | Reference Device: Limbus Contour (K241837), Used for Verification |
| --- | --- | --- | --- |
| Indications for Use | AutoContour is intended to assist radiation treatment planners in contouring and reviewing structures within medical images in preparation for radiation therapy treatment planning | AutoContour is intended to assist radiation treatment planners in contouring and reviewing structures within medical images in preparation for radiation therapy treatment planning | Limbus Contour is a software-only medical device intended for use by trained radiation oncologists, dosimetrists, and physicists to derive optimal contours for input to radiation treatment planning. |
| Target Population | Any patient type for whom relevant modality scan data is available. | Any patient type for whom relevant modality scan data is available. | Any patient type for whom relevant modality scan data is available. |
| Energy Used and/or Delivered | None – software-only application. The software application does not deliver or depend on energy delivered to or from patients | None – software-only application. The software application does not deliver or depend on energy delivered to or from patients | None – software-only application. The software application does not deliver or depend on energy delivered to or from patients |
| Intended users | Trained radiation oncology personnel | Trained radiation oncology personnel | Trained radiation oncology personnel |
| Design: Data Visualization/Graphical User Interface | Contains both an automated processing component and Data Visualization / Graphical User Interface | Contains both an automated processing component and Data Visualization / Graphical User Interface | Contains an automated processing component. |
| Design: View manipulation and Volume rendering | Window and level, pan, zoom, cross-hairs, slice navigation, fused views. | Window and level, pan, zoom, cross-hairs, slice navigation, fused views. | None |

Page 8

| Design: Image registration | Manual and Automatic Rigid registration. Automatic Deformable Registration | Manual and Automatic Rigid registration. Automatic Deformable Registration | None |

AutoContour vs. Predicate and Reference: Model Comparison

| Regions and Volumes of interest (ROI) | CT or MR input for contouring of anatomical regions: Head and Neck, Thorax, Abdomen and Pelvis.Machine learning based contouring of 420 CT-based and 62 MR-based models and manual ROI manipulationCT Models:• A_Aorta• A_Aorta_Asc• A_Aorta_Dsc• A_Brachiocephls• A_Carotid_L• A_Carotid_R• A_Celiac*• A_Circumflex_L• A_Coronary_2d_R• A_Coronary_L• A_Coronary_R• A_LAD• A_Mesenteric_S*• A_Pulmonary*• A_Subclavian_L• A_Subclavian_R• Atrium_L*• Atrium_R*• AV_Node• Barrigel™• BileDuct_Common• Bladder*• Bladder_CBCT*• Bladder_F*• Body*• Body+Mask*• Bone_Hyoid*• Bone_Ilium*• Bone_Ilium_L*• Bone_Ilium_R*• Bone_Ischium_L*• Bone_Ischium_R*• Bone_Mandible*• Bone_Pelvic*• Bone_Pterygoid_L• Bone_Pterygoid_R | CT or MR input for contouring of anatomical regions: Head and Neck, Thorax, Abdomen and Pelvis.Machine learning based contouring of 260 CT-based and 35 MR-based models and manual ROI manipulationCT Models:• A_Aorta• A_Aorta_Asc• A_Aorta_Dsc• A_Brachiocephls• A_Carotid_L• A_Carotid_R• A_Coronary• A_LAD• A_Pulmonary• A_Subclavian_L• A_Subclavian_R• Atrium_L• Atrium_R• Bladder• Bladder_F• Bone_Hyoid• Bone_Ilium_L• Bone_Ilium_R• Bone_Mandible• Bone_Pelvic• Bone_Skull• Bone_Sternum• Bone_Teeth• Bowel• Bowel_Bag• Bowel_Large• Bowel_Small• BrachialPlex_L• BrachialPlex_R• Brain• Brainstem• Breast_L• Breast_R• Breast_Prone• Bronchus• BuccalMucosa | CT or MR input for contouring of anatomical regions: Head and Neck, Thorax, Abdomen and Pelvis.CT Models• A_Aorta• A_Aorta_Base• A_Aorta_I• A_Aorta_l• A_Celiac• A_LAD• A_Mesenteric_S• A_Pulmonary• Atrium_L• Atrium_R• Bag_Bowel• Bag_Bowel_Extend• Bag_Bowel_Full• Bag_Bowel_S• Bladder• Bladder_CBCT• Bladder_HDR• Body• Body+Mask• Bone_Hyoid• Bone_Hyoid• Bone_Ilium_L• Bone_Ilium• Bone_Ilium_L• Bone_Ilium_R• Bone_Ischium_L• Bone_Ischium_R• Bone_Mandible• Bone_Pelvic• BoneMarrow_Pelvic• Bowel• Bowel_Bag• Bowel_Bag_Extend• Bowel_Bag_Full• Bowel_Bag_Superior• Bowel_Extend• Bowel_Full• Bowel_HDR• Bowel_S• 
Bowel_Superior• BrachialPlex_L• BrachialPlex_R• BrachialPlexs• Brain• Brainstem• Breast_Implant_L• Breast_Implant_R• Breast_L |

Page 9

[Continuing with extensive lists of anatomical structures for each model - the content shows detailed comparisons of CT and MR models across the three systems, with hundreds of anatomical structure names listed]

Page 10

[Continuation of anatomical structure lists]

Page 11

[Continuation of anatomical structure lists]

Page 12

[Continuation of anatomical structure lists]

Page 13

[Continuation of anatomical structure lists and system specifications]

| Design: Region/volume of interest measurements and size measurements | None – not applicable | None – not applicable | None – not applicable |
| Design: Region/Volume Quantification | None – not applicable | None – not applicable | None – not applicable |
| Design: Supported modalities | CT or MR input for contouring or registration/fusion. PET/CT input for registration/fusion only. DICOM RTSTRUCT and REGISTRATION for input | CT or MR input for contouring or registration/fusion. PET/CT input for registration/fusion only. DICOM RTSTRUCT and REGISTRATION for input | CT or MR input for contouring. |
| Design: Reporting and data routing | No built-in reporting, supports exporting DICOM RTSTRUCT, REGISTRATION and DOSE files for output. | No built-in reporting, supports exporting DICOM RTSTRUCT, REGISTRATION and DOSE files for output. | No built-in reporting, supports exporting DICOM RTSTRUCT files for output |
| Compatibility with the environment and other devices | Compatible with data from any DICOM compliant scanners for the applicable modalities. Agent Uploader component compatible with Microsoft Windows. Cloud-based automatic contouring service compatible with Linux. | Compatible with data from any DICOM compliant scanners for the applicable modalities. Agent Uploader component compatible with Microsoft Windows. Cloud-based automatic contouring service compatible with Linux. | No Limitation on scanner model, DICOM 3.0 compliance required. |

Page 14

| Web application server-based application compatible with Linux. | Web application server-based application compatible with Linux. | |
| Communications/ Networking | TCP/IP | TCP/IP | N/A |
| Computer platform & Operating System | Windows Operating System | Windows Operating System | Operating System Windows 10 / Windows Server 2016 and Above |

7. Performance

Verification testing confirmed that RADAC V5 features and functionality met predefined acceptance criteria consistent with the predicate device. RADAC models were divided into 4 categories and tested using protocols consistent with the category, while applying consistent acceptance criteria across each. Models were classified as 1) New Models that did not exist in the predicate or reference device, 2) Models that did not exist in the predicate, but were unchanged from the reference device, 3) Models that previously existed in the predicate or reference device but were changed in RADAC V5, and 4) Models that previously existed in the predicate device (AC V4) and are unchanged.

Model Verification Categories and Acceptance Criteria

*, **, ***, and **** refer to specified acceptance criteria.

| New Models (not previously in AC or LC) | Models unchanged from LC 1.8 | Updated Models (regardless of source, AC or LC) | Unchanged Models from AC 2.6 |
| --- | --- | --- | --- |
| New Structure Model Validation: • Training DSC Evaluation* • External Dataset DSC Evaluation* • Internal Likert Qualitative Review** • External Likert Qualitative Review** | Existing Structure Model Tests: • DSC Comparison between Limbus v1.8 and AutoContour v2.7*** • Hausdorff Distance between Limbus v1.8 and AutoContour v2.7**** | Existing Structure Model Tests: • DSC Comparison between Limbus v1.8 and AutoContour v2.7 or between AutoContour v2.6 and AutoContour v2.7*** • Hausdorff Distance between Limbus v1.8 and AutoContour v2.7 or between AutoContour v2.6 and AutoContour v2.7**** New Structure Model Validation: • Training DSC Evaluation* • External Dataset DSC Evaluation* • Internal Likert Qualitative Review** • External Likert Qualitative Review** | Existing Structure Model Tests: • DSC Comparison between AutoContour v2.6 and AutoContour v2.7*** • Hausdorff Distance between AutoContour v2.6 and AutoContour v2.7**** |

Page 15

Model Acceptance testing utilized the following predefined acceptance criteria.

* Training DSC and External Dataset DSC Evaluations: Mean DSC threshold for passing Large Structures: 0.80, Medium Structures: 0.65, and Small Structures: 0.50.

** Internal and External Likert Qualitative Review: Each structure model was determined to pass if the average grade exceeded 3 across all external image sets reviewed.

*** Existing Structure Model DSC Evaluation: The DSC threshold for passing Large Structures: >0.99, Medium Structures >0.98, and Small Structures: >0.95.

**** Existing Structure Model Hausdorff Distance Evaluation: Hausdorff Distance threshold for passing structures is ≤ 3mm.
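One hedged reading of the four predefined criteria above as code: new and updated models must clear a size-dependent mean-DSC floor and a Likert average above 3, while unchanged models must agree closely with the prior version's output and stay within 3 mm Hausdorff distance. The function names are illustrative, not part of the validation protocol.

```python
# Mean-DSC floors for new/updated structure models (criterion *).
NEW_DSC_FLOOR = {"Large": 0.80, "Medium": 0.65, "Small": 0.50}

# DSC floors for existing-model regression comparison (criterion ***);
# note these are comparisons against the previous version's output,
# not against ground truth, hence the much stricter values.
EXISTING_DSC_FLOOR = {"Large": 0.99, "Medium": 0.98, "Small": 0.95}

def new_model_passes(size: str, mean_dsc: float, likert_avg: float) -> bool:
    # Criteria * and **: DSC floor plus average Likert grade exceeding 3.
    return mean_dsc >= NEW_DSC_FLOOR[size] and likert_avg > 3

def existing_model_passes(size: str, dsc_vs_previous: float,
                          hausdorff_mm: float) -> bool:
    # Criteria *** and ****: near-identical DSC vs. the previous version
    # and a Hausdorff distance of at most 3 mm.
    return dsc_vs_previous > EXISTING_DSC_FLOOR[size] and hausdorff_mm <= 3.0
```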

Performance Data

Testing was performed according to Radformation's AutoContour Validation Test Protocol and Report, which demonstrates that AutoContour Model RADAC V5 performs as intended per its indications for use. Further tests were performed on independent datasets from those included in training and validation sets in order to validate the generalizability of the machine learning model.

Description of Changes to Test Protocol

Changes to the test protocol include an update to the existing structure model tests to include Hausdorff Distance as an additional metric for regression testing of existing model output changes, and the inclusion of a comparison to Limbus v1.8. There were no changes to the new and updated structure model testing protocol between AutoContour RADAC V4 and RADAC V5.
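The Hausdorff Distance added for regression testing can be sketched as below: a brute-force computation over two contours represented as point sets (assumed here to be N x 3 arrays of millimeter coordinates). This is a toy-scale illustration, not the protocol's actual implementation.

```python
import numpy as np

# Symmetric Hausdorff distance between two point sets: the largest
# distance from any point on one contour to its nearest point on the
# other. A value ≤ 3 mm is the pass threshold for unchanged models.
def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    # Pairwise distance matrix, shape (len(a), len(b)).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Directed distances in both directions; take the worse of the two.
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Production implementations typically use spatial indexing (k-d trees) or distance transforms rather than the O(N·M) matrix built here.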

Testing Summary

Mean Dice Similarity Coefficient (DSC) was used to validate the accuracy of structure model outputs when tested on image data sequestered from the original training data population. The test datasets were independent from those used for training and consisted of approximately 10% of the number of training image sets used as input for the model. For CT structure models there were an average of 459 training and 49 testing image sets. CT training images were gathered from several institutions, in several different countries.
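The per-structure summary statistics reported in Table 4 (mean DSC, standard deviation, and a lower bound) can be computed as below. The submission does not state its exact lower-bound formula, so the one-sided normal approximation used here (mean minus 1.645 standard deviations) is an assumption.

```python
import numpy as np

# Summarize per-case DSC values for one structure model: mean, sample
# standard deviation, and a one-sided lower bound under a normal
# approximation. Whether this matches the submission's "Lower Bound 95%
# Confidence Interval" column exactly is an assumption.
def summarize_dsc(dscs):
    arr = np.asarray(dscs, dtype=float)
    mean = arr.mean()
    std = arr.std(ddof=1)  # sample standard deviation
    return mean, std, mean - 1.645 * std
```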

Ground truthing of each test dataset was generated manually, using consensus (NRG/RTOG/ESTRO) guidelines as appropriate, by six clinically experienced experts: 2 radiation therapy physicists, 1 radiation dosimetrist, and 3 radiation therapists with specialized training in radiation therapy contouring.

Structure models were categorized into three size categories, as DSC metrics can be sensitive to structure volume. A structure would pass initial validation if the mean DSC exceeded 0.8 for Large volume structures (e.g., Bladder, Spleen), 0.65 for Medium volume structures (e.g., Gallbladder, Duodenum), and 0.5 for Small structures (e.g., Cornea, Retina). For CT structure models, Large, Medium, and Small structures resulted in a mean DSC of 0.91 ± 0.14, 0.86 ± 0.13, and 0.75 ± 0.20, respectively. A full summary of the CT structure DSC is available below:

Table 4: CT Training Data Results for AutoContour Model RADAC V5

Page 16

| CT Structure | Size | Pass Criteria | # of Training Sets | # of Testing Sets | DSC (Avg) | DSC Std Dev | Lower Bound 95% Confidence Interval |
| --- | --- | --- | --- | --- | --- | --- | --- |
| A_Aorta** (Update) | Large | 0.80 | N/A | N/A | N/A | N/A | N/A |
| A_Aorta_Asc** (Update) | Medium | 0.65 | 240 | 60 | 0.92 | 0.02 | 0.91 |
| A_Carotid_L** (Update) | Medium | 0.65 | 328 | 83 | 0.79 | 0.13 | 0.58 |
| A_Carotid_R** (Update) | Medium | 0.65 | 328 | 83 | 0.79 | 0.13 | 0.58 |
| A_Celiac | Small | 0.50 | 435 | 44 | 0.87 | 0.24 | 0.47 |
| A_Circumflex_L | Small | 0.50 | 415 | 45 | 0.52 | 0.36 | -0.08 |
| A_Coronary_2d_R | Small | 0.50 | 415 | 45 | 0.83 | 0.26 | 0.41 |
| A_Coronary_L | Small | 0.50 | 415 | 45 | 0.50 | 0.36 | -0.11 |
| A_Coronary_R** (Update) | Small | 0.50 | 408 | 103 | 0.56 | 0.09 | 0.41 |
| A_Mesenteric_S | Small | 0.50 | 428 | 43 | 0.79 | 0.28 | 0.33 |
| A_Pulmonary** (Update) | Medium | 0.65 | 1338 | 50 | 0.91 | 0.18 | 0.60 |
| Atrium_L** (Update) | Medium | 0.65 | 398 | 45 | 0.82 | 0.19 | 0.51 |
| Atrium_R** (Update) | Medium | 0.65 | 398 | 45 | 0.82 | 0.20 | 0.50 |
| AV_Node | Medium | 0.65 | 398 | 45 | 0.87 | 0.20 | 0.53 |
| Barrigel | Medium | 0.65 | 111 | 28 | 0.77 | 0.06 | 0.66 |
| BileDuct_Common | Small | 0.50 | 643 | 162 | 0.57 | 0.20 | 0.24 |
| Bladder** (Update) | Large | 0.80 | 1105 | 50 | 0.93 | 0.18 | 0.63 |
| Body+Mask* | Large | 0.80 | N/A | N/A | N/A | N/A | N/A |
| Bone_Ischium_L | Large | 0.80 | 521 | 50 | 0.92 | 0.15 | 0.68 |
| Bone_Ischium_R | Large | 0.80 | 35 | 4 | 0.92 | 0.15 | 0.68 |
| Bone_Mandible** (Update) | Medium | 0.65 | 234 | 25 | 0.88 | 0.16 | 0.62 |
| Bone_Pelvic** (Update) | Large | 0.80 | 234 | 25 | 0.94 | 0.12 | 0.74 |
| Bone_Pterygoid_L | Small | 0.50 | 308 | 36 | 0.73 | 0.29 | 0.26 |
| Bone_Pterygoid_R | Small | 0.50 | 308 | 36 | 0.73 | 0.29 | 0.26 |
| Bone_Skull** (Update) | Large | 0.80 | 80 | 20 | 0.92 | 0.01 | 0.90 |
| Bone_Teeth** (Update) | Medium | 0.65 | 340 | 76 | 0.88 | 0.02 | 0.84 |
| Bowel** (Update) | Medium | 0.65 | 221 | 13 | 0.78 | 0.03 | 0.72 |
| Bowel_Bag** (Update) | Large | 0.80 | 454 | 48 | 0.95 | 0.05 | 0.87 |
| Bowel_Bag_F* | Large | 0.80 | N/A | N/A | N/A | N/A | N/A |
| Bowel_F* | Medium | 0.65 | N/A | N/A | N/A | N/A | N/A |
| Bowel_Large** (Update) | Medium | 0.65 | 805 | 52 | 0.89 | 0.17 | 0.61 |
| Bowel_Large_F* | Medium | 0.65 | N/A | N/A | N/A | N/A | N/A |
| Bowel_Small** (Update) | Medium | 0.65 | 705 | 45 | 0.93 | 0.05 | 0.85 |

[Continuation of the CT Training Data Results table with additional structures and their corresponding metrics]

Table 5: CT External Clinical Dataset References

| Model Group | Data Source ID | Data Citation |
|---|---|---|
| CT Pelvis | TCIA - Pelvic-Ref | Afua A. Yorke, Gary C. McDonald, David Solis Jr., Thomas Guerrero. (2019). Pelvic Reference Data. The Cancer Imaging Archive. DOI: 10.7937/TCIA.2019.woskq5oo |
| CT Head and Neck | TCIA - Head-Neck-PET-CT | Martin Vallières, Emily Kay-Rivest, Léo Jean Perrin, Xavier Liem, Christophe Furstoss, Nader Khaouam, Phuc Félix Nguyen-Tan, Chang-Shu Wang, Khalil Sultanem. (2017). Data from Head-Neck-PET-CT. The Cancer Imaging Archive. DOI: 10.7937/K9/TCIA.2017.8oje5q00 |
| CT Abdomen | TCIA - Pancreas-CT-CB | Hong, J., Reyngold, M., Crane, C., Cuaron, J., Hajj, C., Mann, J., Zinovoy, M., Yorke, E., LoCastro, E., Apte, A. P., & Mageras, G. (2021). Breath-hold CT and cone-beam CT images with expert manual organ-at-risk segmentations from radiation treatments of locally advanced pancreatic cancer [Data set]. The Cancer Imaging Archive. https://doi.org/10.7937/TCIA.ESHQ-4D90 |
| CT Thorax | TCIA - NSCLC | Aerts, H. J. W. L., Wee, L., Rios Velazquez, E., Leijenaar, R. T. H., Parmar, C., Grossmann, P., Carvalho, S., Bussink, J., Monshouwer, R., Haibe-Kains, B., Rietveld, D., Hoebers, F., Rietbergen, M. M., Leemans, C. R., Dekker, A., Quackenbush, J., Gillies, R. J., Lambin, P. (2019). Data From |

[Continuation of external dataset references and additional testing information]

Table 6: CT External Reviewer Results for AutoContour Model RADAC V5

| CT Structure | Size | Pass Criteria | # Testing Sets | Average DSC | Average DSC Std. Dev | Lower Bound 95% Confidence Interval | External Reviewer Average Rating (1-5) |
|---|---|---|---|---|---|---|---|
| A_Aorta (Update) | Large | 0.80 | 20 | 0.94 | 0.02 | 0.91 | 4.80 |
| A_Aorta_Asc (Update) | Medium | 0.65 | 20 | 0.91 | 0.03 | 0.86 | 4.40 |
| A_Carotid_L (Update) | Medium | 0.65 | 37 | 0.79 | 0.06 | 0.68 | 4.30 |
| A_Carotid_R (Update) | Medium | 0.65 | 37 | 0.78 | 0.05 | 0.69 | 4.20 |
| A_Celiac | Small | 0.50 | 24 | 0.78 | 0.22 | 0.41 | 4.40 |
| A_Circumflex_L | Small | 0.50 | 40 | 0.56 | 0.16 | 0.30 | 4.25 |

[Continuation of CT External Reviewer Results table]

*N/A: Structures are generated based on a post-processing/boolean operation from previously released structure models (Eye, Rib) rather than generated from a CNN model. Quantitative and Qualitative testing for these structures is still performed in the following sections in order to validate appropriate contour generation and clinical acceptability.

** A_Aorta, A_Aorta_Asc, A_Carotid_L, A_Carotid_R, A_Coronary_R, A_Pulmonary, Atrium_L, Atrium_R, Bladder, Bone_Skull, Bone_Teeth, Bone_Mandible, Bone_Pelvic, Bowel, Bowel_Bag, Bowel_Large, Bowel_Small, BrachialPlex_L, BrachialPlex_R, Brain, Brainstem, BuccalMucosa, CaudaEquina, Cavity_Oral_Ext, Cochlea_L, Cochlea_R, Dental_Artifact, Ear_Internal_L, Ear_Internal_R, Esophagus, Heart, Hippocampus_L, Hippocampus_R, Iliac_Int_L, Iliac_Int_R, Iliac_L, Iliac_R, Kidneys, Larynx, LN_Pelvics_NRG, LN_Presacral, Lobe_Temporal_L, Lobe_Temporal_R, Musc_Iliopsoas_L, Musc_Iliopsoas_R, Myocardium, Parotid_L, Parotid_R, Pancreas, Pericardium, Pharynx, Pituitary, ProstateBed, Rib01_L, Rib01_R, Rib02_L, Rib02_R, Rib03_L, Rib03_R, Rib04_L, Rib04_R, Rib05_L, Rib05_R, Rib06_L, Rib06_R, Rib07_L, Rib07_R, Rib08_L, Rib08_R, Rib09_L, Rib09_R, Rib10_L, Rib10_R, Rib11_L, Rib11_R, Rib12_L, Rib12_R, Rib, Rib_R, SacralPlex_L, SacralPlex_R, SpinalCanal, SpinalCord, Spleen, Trachea, UteroCervix, V_Jugular_L, V_Jugular_R, V_Venacava_I, V_Venacava_S, VB, VB_C1, VB_C2, VB_C3, VB_C4, VB_C5, VB_C6, VB_C7, VB_L1, VB_L2, VB_L3, VB_L4, VB_L5, VB_T01, VB_T02, VB_T03, VB_T04, VB_T05, VB_T06, VB_T07, VB_T08, VB_T09, VB_T10, VB_T11, VB_T12, Ventricle_L, and Ventricle_R models were previously released/tested, but updates to the training datasets and contour output necessitated additional release testing for these models.

Additional external clinical testing was performed to validate the accuracy of the models on image sets that were not represented in the training datasets. Both AutoContour contours and manually generated ground truth contours, following the same structure guidelines used for structure model training, were added to the image sets.

[Continuation of external testing data and references]

[Continuation of external testing results]

The MR training data set used for initial testing of the Brain models (Amygdala_L/R, CorpusCallosum, Cornea_L/R, Falx, Retina_L/R, Tentorium, Sinuses, Thalamus and Ventricle_Brain) had an average of 259 training image sets and 58 testing image sets and were acquired from several different institutions in several countries.

The MR training data used for initial testing of the MR Pelvis models (Rectal_Spacer) had an average of 243 training image sets and 50 testing image sets, drawn from 2 open-source datasets and several institutions in several countries, including the United States, Canada, and Spain. Datasets used for testing were removed from the training dataset pool before model training began and were used exclusively for testing.

Ground truthing of each test data set was generated manually, using consensus (NRG/RTOG) guidelines as appropriate, by six clinically experienced experts: 2 radiation therapy physicists, 1 radiation dosimetrist, and 3 radiation therapists with specialized training in radiation therapy contouring. MR structure models achieved a mean training DSC of 0.82+/-0.12 for medium models and 0.72+/-0.09 for small models.

Table 8: MR Training Data Results for AutoContour Model RADAC V5

| MR Models | Size | Pass Criteria | # of Training Sets | # of Testing Sets | DSC (Avg) | DSC Std Dev (Avg) | Lower Bound 95% Confidence Interval |
|---|---|---|---|---|---|---|---|
| A_Pud_Int_L** (Update) | Small | 0.50 | 221 | 56 | 0.69 | 0.08 | 0.56 |
| A_Pud_Int_R** (Update) | Small | 0.50 | 221 | 56 | 0.69 | 0.08 | 0.56 |
| Amygdala_L | Small | 0.50 | 493 | 133 | 0.73 | 0.07 | 0.61 |
| Amygdala_R | Small | 0.50 | 493 | 133 | 0.73 | 0.07 | 0.61 |
| Bladder_Trigone** (Update) | Medium | 0.65 | 256 | 65 | 0.72 | 0.05 | 0.64 |
| Brainstem** (Update) | Medium | 0.65 | 422 | 50 | 0.88 | 0.16 | 0.62 |
| CorpusCallosum | Medium | 0.65 | 56 | 15 | 0.72 | 0.25 | 0.31 |
| Falx | Medium | 0.65 | 178 | 45 | 0.83 | 0.03 | 0.78 |
| HDR_Bowel | Medium | 0.65 | 307 | 37 | 0.83 | 0.19 | 0.51 |
| HDR_Colon_Sigmoid | Medium | 0.65 | 240 | 24 | 0.81 | 0.21 | 0.46 |
| Lens_L** (Update) | Small | 0.50 | 208 | 52 | 0.81 | 0.09 | 0.66 |
| Lens_R** (Update) | Small | 0.50 | 208 | 52 | 0.81 | 0.09 | 0.66 |
| Medulla* | Medium | 0.65 | N/A | N/A | N/A | N/A | N/A |
| Midbrain* | Medium | 0.65 | N/A | N/A | N/A | N/A | N/A |
| NVB_L** (Update) | Small | 0.50 | 167 | 43 | 0.61 | 0.08 | 0.48 |
| NVB_R** (Update) | Small | 0.50 | 167 | 43 | 0.61 | 0.08 | 0.48 |
| OpticNrv_L** (Update) | Small | 0.50 | 164 | 32 | 0.64 | 0.08 | 0.51 |
| OpticNrv_R** (Update) | Small | 0.50 | 164 | 32 | 0.64 | 0.08 | 0.51 |
| OpticTract_L** (Update) | Small | 0.50 | 304 | 76 | 0.72 | 0.08 | 0.59 |
| OpticTract_R** (Update) | Small | 0.50 | 304 | 76 | 0.72 | 0.08 | 0.59 |
| PenileBulb_TRUFI | Small | 0.50 | 228 | 58 | 0.79 | 0.12 | 0.59 |
| Pons* | Medium | 0.65 | N/A | N/A | N/A | N/A | N/A |
| Rectal_Spacer** (Update) | Small | 0.50 | 233 | 26 | 0.82 | 0.24 | 0.43 |
| Sinuses | Medium | 0.65 | 178 | 45 | 0.83 | 0.04 | 0.762 |
| Tentorium | Small | 0.50 | 180 | 46 | 0.77 | 0.05 | 0.69 |
| Thalamus | Medium | 0.65 | 361 | 40 | 0.83 | 0.13 | 0.62 |
| Urethra** (Update) | Small | 0.50 | 390 | 100 | 0.68 | 0.09 | 0.53 |
| Ventricle_Brain | Medium | 0.65 | 182 | 47 | 0.90 | 0.04 | 0.8342 |

*These structures were generated from post-processing operations on a previously released or re-tested model (Brainstem) rather than from a CNN model. Qualitative and quantitative analysis of these structures is still performed.

**A_Pud_Int_L, A_Pud_Int_R, Bladder_Trigone, Brainstem, Lens_L, Lens_R, NVB_L, NVB_R, OpticNrv_L, OpticNrv_R, OpticTract_L, OpticTract_R, Rectal_Spacer, and Urethra models were previously released/tested, but updates to the training datasets and contour output necessitated additional release testing for these models.

Additional external clinical testing was performed to validate the accuracy of the models on image sets that were not represented in the training datasets.

Table 9: MR External Clinical Dataset References

| Model Group | Data Source ID | Data Citation |
|---|---|---|
| MR Brain | MR - Renown | N/A |
| MR Pelvis | Gold Atlas Pelvis | Nyholm, Tufve, Stina Svensson, Sebastian Andersson, Joakim Jonsson, Maja Sohlin, Christian Gustafsson, Elisabeth Kjellén, et al. 2018. "MR and CT Data with Multi Observer Delineations of Organs in the Pelvic Area - Part of the Gold Atlas Project." Medical Physics 12 (10): 3218–21. doi:10.1002/mp.12748 |
| MR Pelvis_2 | SynthRad | Thummerer A, van der Bijl E, Galapon Jr A, Verhoeff JJ, Langendijk JA, Both S, van den Berg CAT, Maspero M. 2023. SynthRAD2023 Grand Challenge dataset: Generating synthetic CT for radiotherapy. Medical Physics, 50(7), 4664-4674. https://doi.org/10.1002/mp.16529 |
| MRLinac Pelvis | MR Linac | N/A - Testing data was shared by 2 institutions utilizing MR Linacs for image acquisitions. |
| MR Female HDR Brachy | Female HDR MR Pelvis | N/A - Testing data was shared by 1 institution in Canada. |
| MR Pelvis Barrigel | Barrigel | N/A - Testing data was shared by several institutions in Australia. |

For the Brain models, datasets containing 20 MR T1 Ax post (BRAVO) image scans acquired with a GE MR750w scanner were obtained via data-use agreement from a clinical partner. Images had an average slice thickness of 1.6 mm, an in-plane resolution of 0.94 mm, and acquisition parameters of TR=5.98 ms and TE=96.8 s. Data for testing of the MR Pelvis structure models were acquired from 2 publicly available datasets, which contained images of patients with prostate or rectal cancer, as well as 1 dataset shared by 2 institutions utilizing an MR Linac. Various scanner models and acquisition settings were used. Data for testing of the MR Pelvis HDR structure models were acquired from 1 institution using two different slice thicknesses, 1 mm and 4 mm, and two different in-plane resolutions, 1 mm and 0.72 mm.

DSC values were calculated between ground truth contour data and AutoContour structures and rated against the same DSC passing criteria used for the training DSC validation. All structures but one passed the minimum DSC criteria for small and medium structures, with a mean DSC of 0.71+/-0.13 and 0.78+/-0.09 respectively. Additionally, the qualitative clinical appropriateness of AutoContour structures generated on these scans was graded by clinical experts on a scale from 1 to 5, where 5 indicates a contour requiring no additional edits and 1 indicates that a full manual re-contour of the structure would be required. An average score >= 3 was used to determine whether a structure model would ultimately be clinically beneficial. An average rating of 4.3 was found across all MR structure models, demonstrating that only minor edits would be required to make the structure models acceptable for clinical use. The single structure that did not pass the minimum DSC criteria was also evaluated by clinical experts and scored an average of 3.90, supporting the clinical effectiveness of this model.
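
The per-structure summaries reported in the tables (mean DSC, standard deviation, a one-sided lower bound, and an average reviewer rating checked against the >= 3 threshold) can be reproduced with simple descriptive statistics. The sketch below is a hedged illustration: it assumes the lower bound is computed as mean - 1.645*SD (a one-sided 95% bound under a normal approximation), which the clearance summary does not explicitly state, and the example input values are invented:

```python
import statistics

def summarize_structure(dscs, ratings, pass_dsc, pass_rating=3.0):
    """Summarize per-case DSC values and 1-5 reviewer ratings for one structure.

    lower_95 assumes a mean - 1.645*SD one-sided bound (an assumption, not
    a formula stated in the submission).
    """
    mean_dsc = statistics.mean(dscs)
    sd = statistics.stdev(dscs)
    return {
        "mean_dsc": mean_dsc,
        "sd": sd,
        "lower_95": mean_dsc - 1.645 * sd,
        "passes_dsc": mean_dsc >= pass_dsc,
        "passes_rating": statistics.mean(ratings) >= pass_rating,
    }

# Invented example: a Medium structure (DSC pass criterion 0.65),
# three test cases, three reviewer ratings.
print(summarize_structure([0.80, 0.90, 1.00], [4, 5, 4], pass_dsc=0.65))
```

Note that a structure can fail the DSC criterion yet still be judged clinically useful by reviewers (as with the single failing MR structure rated 3.90), which is why the two checks are reported separately rather than combined.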

Table 10: MR External Reviewer Results for AutoContour Model RADAC V5

| MR Models | Size | Pass Criteria | # External Test Data Sets | Average DSC | Average DSC Std. Dev | Lower Bound 95% Confidence Interval | External Reviewer Average Rating (1-5) |
|---|---|---|---|---|---|---|---|
| A_Pud_Int_L (Update) | Small | 0.50 | 39 | 0.57 | 0.11 | 0.39 | 4.30 |
| A_Pud_Int_R (Update) | Small | 0.50 | 39 | 0.59 | 0.07 | 0.48 | 4.40 |
| Amygdala_L | Small | 0.50 | 20 | 0.65 | 0.17 | 0.37 | 4.30 |
| Amygdala_R | Small | 0.50 | 19 | 0.66 | 0.17 | 0.48 | 4.10 |
| Bladder_Trigone (Update) | Medium | 0.65 | 45 | 0.56 | 0.17 | 0.28 | 4.20 |
| Brainstem (Update) | Medium | 0.65 | 20 | 0.93 | 0.02 | 0.90 | 4.80 |
| CorpusCallosum | Medium | 0.65 | 20 | 0.76 | 0.08 | 0.63 | 4.80 |
| Falx | Medium | 0.65 | 20 | 0.79 | 0.09 | 0.64 | 4.50 |
| HDR_Bowel | Medium | 0.65 | 7 | 0.50 | 0.14 | 0.26 | 3.90 |
| HDR_Colon_Sigmoid | Medium | 0.65 | 8 | 0.80 | 0.12 | 0.60 | 4.40 |
| Lens_L (Update) | Small | 0.50 | 18 | 0.74 | 0.13 | 0.53 | 4.80 |
| Lens_R (Update) | Small | 0.50 | 19 | 0.68 | 0.16 | 0.41 | 4.80 |
| Medulla | Medium | 0.65 | 19 | 0.82 | 0.09 | 0.67 | 4.40 |
| Midbrain | Medium | 0.65 | 20 | 0.84 | 0.06 | 0.74 | 4.40 |
| NVB_L (Update) | Small | 0.50 | 6 | 0.81 | 0.09 | 0.66 | 4.20 |
| NVB_R (Update) | Small | 0.50 | 6 | 0.83 | 0.04 | 0.76 | 4.10 |
| OpticNrv_L (Update) | Small | 0.50 | 20 | 0.75 | 0.17 | 0.47 | 4.20 |
| OpticNrv_R (Update) | Small | 0.50 | 20 | 0.70 | 0.17 | 0.42 | 4.40 |
| OpticTract_L (Update) | Small | 0.50 | 20 | 0.78 | 0.18 | 0.49 | 4.20 |
| OpticTract_R (Update) | Small | 0.50 | 20 | 0.80 | 0.16 | 0.54 | 4.10 |
| PenileBulb_TRUFI | Small | 0.50 | 39 | 0.74 | 0.09 | 0.59 | 4.40 |
| Pons | Medium | 0.65 | 20 | 0.90 | 0.04 | 0.83 | 4.40 |
| Rectal_Spacer (Update) | Small | 0.50 | 18 | 0.80 | 0.09 | 0.65 | 4 |
| Sinuses | Medium | 0.65 | 20 | 0.79 | 0.18 | 0.49 | 4.30 |
| Tentorium | Small | 0.50 | 20 | 0.67 | 0.18 | 0.37 | 4.30 |
| Thalamus | Medium | 0.65 | 20 | 0.81 | 0.08 | 0.68 | 4.50 |
| Urethra (Update) | Small | 0.50 | 39 | 0.55 | 0.13 | 0.33 | 4.50 |
| Ventricle_Brain | Medium | 0.65 | 20 | 0.91 | 0.03 | 0.86 | 4.30 |

Validation testing of the AutoContour application demonstrated that the software meets user needs and intended uses of the application.

8. Conclusion

AutoContour RADAC V5 is deemed substantially equivalent to the primary Predicate Device, AutoContour RADAC V4 (K242729). Verification tests were performed to ensure that the software works as intended, and pass/fail criteria were used to verify requirements. Validation testing was performed to ensure that the software behaves as intended, and output results from AutoContour were validated against accepted results for known planning parameters from clinically utilized treatment planning systems. All regression tests passed. Verification and validation testing and the risk documentation demonstrate that AutoContour is as safe and effective as the Predicate Device. The minor technological differences between AutoContour Model RADAC V5 and the Predicate Device do not raise any significant questions about the safety and effectiveness of the Subject Device.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).