K Number
K230685
Device Name
AutoContour Model RADAC V3
Manufacturer
Radformation, Inc.
Date Cleared
2023-04-14

(32 days)

Product Code
QKB
Regulation Number
892.2050
AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | Is PCCP Authorized
Intended Use
AutoContour is intended to assist radiation treatment planners in contouring structures within medical images in preparation for radiation therapy treatment planning.
Device Description
As with AutoContour Model RADAC V2, the AutoContour Model RADAC V3 device is software that uses DICOM-compliant image data (CT or MR) as input to: (1) automatically contour various structures of interest for radiation therapy treatment planning using machine learning based contouring (the deep-learning based structure models are trained using imaging datasets consisting of anatomical organs of the head and neck, thorax, abdomen and pelvis for adult male and female patients); (2) allow the user to review and modify the resulting contours; and (3) generate DICOM-compliant structure set data that can be imported into a radiation therapy treatment planning system.

AutoContour Model RADAC V3 consists of 3 main components:

1. A .NET client application designed to run on the Windows Operating System allowing the user to load image and structure sets for upload to the cloud-based server for automatic contouring, perform registration with other image sets, as well as review, edit, and export the structure set.
2. A local "agent" service designed to run on the Windows Operating System that is configured by the user to monitor a network storage location for new CT and MR datasets that are to be automatically contoured.
3. A cloud-based automatic contouring service that produces initial contours based on image sets sent by the user from the .NET client application.
More Information

Not Found

AI/ML: Yes
The device description explicitly states that the software uses "machine learning based contouring" and that the "deep-learning based structure models are trained using imaging datasets".

Therapeutic: No
The device aids in radiation treatment planning by contouring structures, but it does not directly treat or diagnose a disease.

Diagnostic: No

The device is intended to assist radiation treatment planners in contouring structures for radiation therapy treatment planning. It does not provide a diagnosis of a patient's medical condition; it performs segmentation (contouring) of existing medical images.

SaMD (Software as a Medical Device): Yes

The device description explicitly states that the device is "software" and details its components as a .NET client application, a local "agent" service, and a cloud-based automatic contouring service, all of which are software-based. There is no mention of accompanying hardware that is part of the medical device itself.

IVD (In Vitro Diagnostic): No

Based on the provided information, this device is not an IVD.

Here's why:

  • Intended Use: The intended use is to "assist radiation treatment planners in contouring structures within medical images in preparation for radiation therapy treatment planning." This is a function related to medical imaging analysis and treatment planning, not the diagnosis of disease through in vitro examination of specimens.
  • Device Description: The device processes medical images (CT and MR) to generate contours. It does not analyze biological specimens like blood, urine, or tissue samples.
  • Lack of IVD Characteristics: The description does not mention any of the typical characteristics of an IVD, such as:
    • Analyzing biological samples.
    • Detecting or measuring substances in biological samples.
    • Providing information for the diagnosis, monitoring, or prognosis of a disease based on in vitro testing.

The device is clearly focused on image processing and analysis for the purpose of radiation therapy planning, which falls under the category of medical imaging software or treatment planning software, not in vitro diagnostics.

PCCP Authorized: No
The letter does not state that the FDA has reviewed and approved or cleared a Predetermined Change Control Plan (PCCP) for this specific device.

Intended Use / Indications for Use

AutoContour is intended to assist radiation treatment planners in contouring structures within medical images in preparation for radiation therapy treatment planning.

Product codes

QKB

Device Description

As with AutoContour Model RADAC V2, the AutoContour Model RADAC V3 device is software that uses DICOM-compliant image data (CT or MR) as input to: (1) automatically contour various structures of interest for radiation therapy treatment planning using machine learning based contouring (the deep-learning based structure models are trained using imaging datasets consisting of anatomical organs of the head and neck, thorax, abdomen and pelvis for adult male and female patients); (2) allow the user to review and modify the resulting contours; and (3) generate DICOM-compliant structure set data that can be imported into a radiation therapy treatment planning system.

AutoContour Model RADAC V3 consists of 3 main components:

    1. A .NET client application designed to run on the Windows Operating System allowing the user to load image and structure sets for upload to the cloud-based server for automatic contouring, perform registration with other image sets, as well as review, edit, and export the structure set.
    2. A local "agent" service designed to run on the Windows Operating System that is configured by the user to monitor a network storage location for new CT and MR datasets that are to be automatically contoured.
    3. A cloud-based automatic contouring service that produces initial contours based on image sets sent by the user from the .NET client application.
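The folder-monitoring behavior of component 2 can be sketched in a few lines. This is a minimal, illustrative polling step only; the function and file-pattern choices are assumptions for illustration, not Radformation's implementation.

```python
"""Minimal sketch of a folder-monitoring 'agent' service (illustrative
only; names and logic are hypothetical, not Radformation's code)."""
from pathlib import Path


def scan_for_new_datasets(watch_dir, seen):
    """Return paths of DICOM files that have appeared in `watch_dir`
    since the last scan, recording them in `seen` so each file is
    reported exactly once."""
    new_files = []
    for path in sorted(Path(watch_dir).glob("*.dcm")):
        if path.name not in seen:
            seen.add(path.name)
            new_files.append(path)
    return new_files
```

A production agent would additionally validate DICOM headers (e.g. confirm the modality is CT or MR) and hand complete datasets to the upload component; this sketch covers only the detection step.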

Mentions image processing

Yes

Mentions AI, DNN, or ML

Yes

Input Imaging Modality

CT or MR input for contouring or registration/fusion. PET/CT input for registration/fusion only.

Anatomical Site

Head and Neck, Thorax, Abdomen and Pelvis.

Indicated Patient Age Range

Patient ages: 11-30: 0.3%; 31-50: 6.2%; 51-70: 43.3%; 71-100: 50.3% (distribution across the training/testing population).

Intended User / Care Setting

radiation treatment planners

Description of the training set, sample size, data source, and annotation protocol

For CT structure models there were an average of 373 training image sets. Among the patients used for CT training and testing, 51.7% were male and 48.3% female. Patient ages: 11-30: 0.3%, 31-50: 6.2%, 51-70: 43.3%, 71-100: 50.3%. Race: 84.0% White, 12.8% Black or African American, 3.2% Other. CT datasets spanned the treatment subgroups most typically found in a radiation therapy treatment clinic, with the most common diagnoses being cancers of the Prostate (21%), Breast (21%), Lung (29%), Head and Neck (16%), and Other (13%). Images were acquired using a Philips Big Bore CT simulator, with the majority of scans having an average slice thickness of 2 mm, in-plane resolution between 1-1.2 mm, and acquisition parameters of 120 kVp and 674+/-329 average mAs.

Ground truth for each test data set was generated manually using consensus (NRG/RTOG) guidelines, as appropriate, by three clinically experienced experts: 2 radiation therapy physicists and 1 radiation dosimetrist.

The MR training data set for the Brain models (Cerebellum, Hypothalamus, Hypo_True, OpticTract_L, OpticTract_R, Pituitary, Eye_L, Eye_R) had an average of 274 training image sets and was acquired from the Cancer Imaging Archive GLIS-RT dataset. These data sets consisted primarily of glioblastoma and astrocytoma patients. Images were acquired on either a GE Signa HDxT (3T) or Siemens Skyra (3T) scanner and had an average slice thickness of 1 mm, in-plane resolution between 0.5-1.0 mm, and acquisition parameters of TR=2.3-8.9 ms, TE=3.2 s.

The MR training data for the MR Pelvis models (Prostate, Glnd_Prostate, and SeminalVes) was taken from the Cancer Imaging Archive Prostate-MRI-US-Biopsy dataset, which consisted of patients with a suspicion of prostate cancer due to elevated PSA and/or suspicious imaging findings. The images used were T2-Axial image sets acquired on a 3T Siemens Skyra scanner. The majority of pulse sequences used were 3D T2:SPC, with TR/TE 2200/203, Matrix/FOV 256 × 205/14 × 14 cm, and 1.5 mm slice spacing.

Description of the test set, sample size, data source, and annotation protocol

Mean Dice Similarity Coefficient (DSC) was used to validate the accuracy of structure model outputs when tested on image data sequestered from the original training data population. The test datasets were independent from those used for training and consisted of approximately 10% of the number of training image sets used as input for the model. For CT structure models there were an average of 50 testing image sets. Among the patients used for CT training and testing, 51.7% were male and 48.3% female. Patient ages: 11-30: 0.3%, 31-50: 6.2%, 51-70: 43.3%, 71-100: 50.3%. Race: 84.0% White, 12.8% Black or African American, 3.2% Other. CT datasets spanned the treatment subgroups most typically found in a radiation therapy treatment clinic, with the most common diagnoses being cancers of the Prostate (21%), Breast (21%), Lung (29%), Head and Neck (16%), and Other (13%). Images were acquired using a Philips Big Bore CT simulator, with the majority of scans having an average slice thickness of 2 mm, in-plane resolution between 1-1.2 mm, and acquisition parameters of 120 kVp and 674+/-329 average mAs.

Ground truth for each test data set was generated manually using consensus (NRG/RTOG) guidelines, as appropriate, by three clinically experienced experts: 2 radiation therapy physicists and 1 radiation dosimetrist.

Additional external clinical testing was performed to validate the accuracy of the models on image sets distinct from the training datasets. Publicly available CT datasets from The Cancer Imaging Archive (TCIA) were used, and both AutoContour contours and manually generated ground truth contours, following the same structure guidelines used for structure model training, were added to the image sets.

The MR test data set for the Brain models (Cerebellum, Hypothalamus, Hypo_True, OpticTract_L, OpticTract_R, Pituitary, Eye_L, Eye_R) had an average of 92 testing image sets and was acquired from the Cancer Imaging Archive GLIS-RT dataset. These data sets consisted primarily of glioblastoma and astrocytoma patients. Images were acquired on either a GE Signa HDxT (3T) or Siemens Skyra (3T) scanner and had an average slice thickness of 1 mm, in-plane resolution between 0.5-1.0 mm, and acquisition parameters of TR=2.3-8.9 ms, TE=3.2 s.

Datasets used for testing were removed from the training dataset pool before model training began, and used exclusively for testing.
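The sequestration step described above can be sketched as a simple held-out split. This is an illustrative sketch under stated assumptions (hypothetical dataset IDs; the document reports test sets of roughly 10% of the training count), not the submitter's actual procedure.

```python
"""Sketch of sequestering test data before training (hypothetical IDs;
the document reports test sets of roughly 10% of the training count)."""
import random


def sequester_split(dataset_ids, test_fraction=0.1, seed=0):
    """Randomly remove a test subset from the pool before any model
    training begins; the remainder becomes the training pool."""
    rng = random.Random(seed)
    ids = list(dataset_ids)
    n_test = max(1, round(len(ids) * test_fraction))
    test_ids = set(rng.sample(ids, n_test))
    train_ids = [i for i in ids if i not in test_ids]
    return train_ids, sorted(test_ids)


# e.g. an average of 373 CT training image sets -> 37 sequestered for testing
train, test = sequester_split(range(373))
```

The point of splitting before training, rather than after, is that the test sets can never influence model weights or hyperparameter choices.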

Ground truth for each test data set was generated manually using consensus (NRG/RTOG) guidelines, as appropriate, by three clinically experienced experts: 2 radiation therapy physicists and 1 radiation dosimetrist.

For the Brain models, datasets containing 20 MR T1 Ax post (BRAVO) image scans acquired with a GE MR750w scanner were obtained via data-use agreement from a clinical partner. Images had an average slice thickness of 1.6 mm, in-plane resolution of 0.94 mm, and acquisition parameters of TR=5.98 ms, TE=96.8 ms. Data for testing of the MR Pelvis structure models were acquired from a publicly available Gold Atlas data set which contained 19 images of patients with prostate or rectal cancer. Images were acquired on a GE DISCOVERY MR750w (Ax T2 FRFSE) with in-plane resolution of 0.9 mm, slice thickness of 2.5 mm, TR=5.988 s and TE=96.8 ms.

Summary of Performance Studies (study type, sample size, AUC, MRMC, standalone performance, key results)

Non-clinical tests were performed according to Radformation's AutoContour Complete Test Protocol and Report.
Mean Dice Similarity Coefficient (DSC) was used to validate the accuracy of structure model outputs.
For CT structure models, large, medium and small structures resulted in a mean DSC of 0.88+/-0.06, 0.88+/-0.08, and 0.75+/-0.12, respectively.
DSC values between ground truth contour data and AutoContour structures for external clinical datasets: all structures passed the minimum DSC criteria for small, medium and large structures, with a mean DSC of 0.79+/-0.11, 0.83+/-0.12, and 0.90+/-0.09, respectively.
Additionally, the qualitative clinical appropriateness of AutoContour structures generated on these scans was graded by clinical experts. An average rating of 4.5 was found across all CT structure models, demonstrating that only minor edits would be required to make the structure models acceptable for clinical use.

For MR Structure models, a mean training DSC of 0.87+/-0.07 was found for medium models and 0.74+/-0.07 for small models.
DSC values between ground truth contour data and AutoContour structures for external clinical datasets: all structures passed the minimum DSC criteria for small and medium structures, with mean DSCs of 0.74+/-0.07 and 0.87+/-0.07, respectively.
Additionally, the qualitative clinical appropriateness of AutoContour structures generated on these scans was graded by clinical experts. An average rating of 4.4 was found across all MR structure models, demonstrating that only minor edits would be required to make the structure models acceptable for clinical use.
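The Dice Similarity Coefficient used throughout these studies measures volumetric overlap between an automatic contour and its ground truth. A minimal sketch of the computation (the masks and score arrays here are illustrative, not the actual test data):

```python
"""Dice Similarity Coefficient between binary masks; illustrative
arrays only, not the submission's test data."""
import numpy as np


def dice(mask_a, mask_b):
    """DSC = 2|A intersect B| / (|A| + |B|) for boolean/0-1 masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom


auto = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0]])   # auto-contoured mask
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0]])  # ground-truth mask
score = dice(auto, truth)         # 2 overlapping voxels of 4+4 -> 0.5

# Per-size-class summaries such as "0.88 +/- 0.06" are the mean and
# standard deviation of such scores over all structures in the class:
scores = np.array([0.85, 0.90, 0.88])
mean_dsc, sd_dsc = scores.mean(), scores.std()
```

A DSC of 1.0 means perfect overlap and 0.0 means none, which is why smaller structures (fewer voxels, so each boundary voxel matters more) tend to score lower, consistent with the small-structure means reported above.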

Key Metrics (Sensitivity, Specificity, PPV, NPV, etc.)

Dice Similarity Coefficient (DSC), Average Rating (1-5)

Predicate Device(s): If the device was cleared using the 510(k) pathway, identify the Predicate Device(s) K/DEN number used to claim substantial equivalence and list them here in a comma separated list exactly as they appear in the text. List the primary predicate first in the list.

AutoContour Model RADAC V2 (K220598)

Reference Device(s): Identify the Reference Device(s) K/DEN number and list them here in a comma separated list exactly as they appear in the text.

Not Found

Predetermined Change Control Plan (PCCP) - All Relevant Information for the subject device only (e.g. presence / absence, what scope was granted / cleared under the PCCP, any restrictions, etc).

Not Found

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).


[Letterhead: U.S. Food & Drug Administration / Department of Health & Human Services]

April 14, 2023

Radformation, Inc.
% Kurt Sysock, Co-founder/CEO
335 Madison Avenue, 4th Floor
New York, NY 10017

Re: K230685

Trade/Device Name: AutoContour Model RADAC V3
Regulation Number: 21 CFR 892.2050
Regulation Name: Medical Image Management And Processing System
Regulatory Class: Class II
Product Code: QKB
Dated: March 9, 2023
Received: March 13, 2023

Dear Kurt Sysock:

We have reviewed your Section 510(k) premarket notification of intent to market the device referenced above and have determined the device is substantially equivalent (for the indications for use stated in the enclosure) to legally marketed predicate devices marketed in interstate commerce prior to May 28, 1976, the enactment date of the Medical Device Amendments, or to devices that have been reclassified in accordance with the provisions of the Federal Food, Drug, and Cosmetic Act (Act) that do not require approval of a premarket approval application (PMA). You may, therefore, market the device, subject to the general controls provisions of the Act.

Although this letter refers to your product as a device, please be aware that some cleared products may instead be combination products. The 510(k) Premarket Notification Database located at https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm identifies combination product submissions.

The general controls provisions of the Act include requirements for annual registration, listing of devices, good manufacturing practice, labeling, and prohibitions against misbranding and adulteration. Please note: CDRH does not evaluate information related to contract liability warranties. We remind you, however, that device labeling must be truthful and not misleading.

If your device is classified (see above) into either class II (Special Controls) or class III (PMA), it may be subject to additional controls. Existing major regulations affecting your device can be found in the Code of Federal Regulations, Title 21, Parts 800 to 898. In addition, FDA may publish further announcements concerning your device in the Federal Register.

Please be advised that FDA's issuance of a substantial equivalence determination does not mean that FDA has made a determination that your device complies with other requirements of the Act or any Federal statutes and regulations administered by other Federal agencies. You must comply with all the Act's requirements, including, but not limited to: registration and listing (21 CFR Part 807); labeling (21 CFR Part 801); medical device reporting (reporting of medical device-related adverse events) (21 CFR 803) for devices or postmarketing safety reporting (21 CFR 4, Subpart B) for combination products (see https://www.fda.gov/combination-products/guidance-regulatory-information/postmarketing-safety-reporting-combination-products); good manufacturing practice requirements as set forth in the quality systems (QS) regulation (21 CFR Part 820) for devices or current good manufacturing practices (21 CFR 4, Subpart A) for combination products; and, if applicable, the electronic product radiation control provisions (Sections 531-542 of the Act; 21 CFR 1000-1050).

Also, please note the regulation entitled, "Misbranding by reference to premarket notification" (21 CFR Part 807.97). For questions regarding the reporting of adverse events under the MDR regulation (21 CFR Part 803), please go to https://www.fda.gov/medical-device-safety/medical-device-reporting-mdr-how-report-medical-device-problems.

For comprehensive regulatory information about medical devices and radiation-emitting products, including information about labeling regulations, please see Device Advice (https://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance) and CDRH Learn (https://www.fda.gov/training-and-continuing-education/cdrh-learn). Additionally, you may contact the Division of Industry and Consumer Education (DICE) to ask a question about a specific regulatory topic. See the DICE website (https://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance/contact-us-division-industry-and-consumer-education-dice) for more information or contact DICE by email (DICE@fda.hhs.gov) or phone (1-800-638-2041 or 301-796-7100).

Sincerely,

[Digitally signed by Lora D. Weidner, 2023.04.14 10:50:08 -04'00']

Lora D. Weidner, Ph.D. Assistant Director Radiation Therapy Team DHT8C: Division of Radiological Imaging and Radiation Therapy Devices OHT8: Office of Radiological Health Office of Product Evaluation and Quality Center for Devices and Radiological Health

Enclosure


Indications for Use

510(k) Number (if known) K230685

Device Name AutoContour Model RADAC V3

Indications for Use (Describe)

AutoContour is intended to assist radiation treatment planners in contouring structures within medical images in preparation for radiation therapy treatment planning.

Type of Use (Select one or both, as applicable)
---------------------------------------------------

[X] Prescription Use (Part 21 CFR 801 Subpart D)

[ ] Over-The-Counter Use (21 CFR 801 Subpart C)

CONTINUE ON A SEPARATE PAGE IF NEEDED.

This section applies only to requirements of the Paperwork Reduction Act of 1995.

DO NOT SEND YOUR COMPLETED FORM TO THE PRA STAFF EMAIL ADDRESS BELOW.

The burden time for this collection of information is estimated to average 79 hours per response, including the time to review instructions, search existing data sources, gather and maintain the data needed and complete and review the collection of information. Send comments regarding this burden estimate or any other aspect of this information collection, including suggestions for reducing this burden, to:

Department of Health and Human Services, Food and Drug Administration, Office of Chief Information Officer, Paperwork Reduction Act (PRA) Staff, PRAStaff@fda.hhs.gov

"An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB number."


K230685

AutoContour Software

Radformation, Inc.

Special 510(k) Summary


Table of Contents

5.1. Submitter's Information
5.2. Device Information
5.3. Predicate Device Information
5.4. Device Description
5.5. Indications for Use
5.6. Technological Characteristics
5.7. Discussion of differences
5.9. Conclusion


This 510(k) Summary has been created per the requirements of the Safe Medical Device Act (SMDA) of 1990, and the content is provided in conformance with 21 CFR Part 807.92.

5.1. Submitter's Information

Table 1: Submitter's Information

Submitter's Name: Kurt Sysock
Company: Radformation, Inc.
Address: 335 Madison Avenue, 4th Floor, New York, NY 10017
Contact Person: Alan Nelson, Chief Science Officer, Radformation
Phone: 518-888-5727
Fax: ----------
Email: anelson@radformation.com
Date of Summary Preparation: 03/09/2023

5.2. Device Information

Table 2: Device Information

Trade Name: AutoContour Model RADAC V3
Common Name: AutoContour, AutoContouring, AutoContour Agent, AutoContour Cloud Server
Classification Name: Class II
Classification: Medical image management and processing system
Regulation Number: 892.2050
Product Code: QKB
Classification Panel: Radiology


5.3. Predicate Device Information

AutoContour Model RADAC V3 (Subject Device) makes use of its prior submission, AutoContour Model RADAC V2 (K220598), as the Predicate Device.

5.4. Device Description

As with AutoContour Model RADAC V2, the AutoContour Model RADAC V3 device is software that uses DICOM-compliant image data (CT or MR) as input to: (1) automatically contour various structures of interest for radiation therapy treatment planning using machine learning based contouring (the deep-learning based structure models are trained using imaging datasets consisting of anatomical organs of the head and neck, thorax, abdomen and pelvis for adult male and female patients); (2) allow the user to review and modify the resulting contours; and (3) generate DICOM-compliant structure set data that can be imported into a radiation therapy treatment planning system.

AutoContour Model RADAC V3 consists of 3 main components:

    1. A .NET client application designed to run on the Windows Operating System allowing the user to load image and structure sets for upload to the cloud-based server for automatic contouring, perform registration with other image sets, as well as review, edit, and export the structure set.
    2. A local "agent" service designed to run on the Windows Operating System that is configured by the user to monitor a network storage location for new CT and MR datasets that are to be automatically contoured.
    3. A cloud-based automatic contouring service that produces initial contours based on image sets sent by the user from the .NET client application.

5.5. Indications for Use

AutoContour is intended to assist radiation treatment planners in contouring and reviewing structures within medical images in preparation for radiation therapy treatment planning.

5.6. Technological Characteristics

The Subject Device, AutoContour Model RADAC V3 makes use of AutoContour Model RADAC V2 (K220598) as the Predicate Device for substantial equivalence comparison. The functionality and technical components of this prior submission remain unchanged in AutoContour Model RADAC V3. This submission is intended to build on the technological characteristics of the 510(k) cleared AutoContour Model RADAC V2 pertaining to new structure models for both CT and MRI.


5.6.1. Updates vs. AutoContour (K220598)

The updated submission expands the use of machine-learning based contouring to include additional organs and volumes of interest found in MR and CT image types.

Table 3: Technological Characteristics, AutoContour Model RADAC V3 vs. AutoContour Model RADAC V2 (K220598)

Indications for Use (identical for Subject and Predicate Devices): AutoContour is intended to assist radiation treatment planners in contouring and reviewing structures within medical images in preparation for radiation therapy treatment planning.

Design: Image registration (identical): Manual and Automatic Rigid registration. Automatic Deformable Registration.

Design: Supported modalities (identical): CT or MR input for contouring or registration/fusion. PET/CT input for registration/fusion only. DICOM RTSTRUCT for output.

Regions and Volumes of interest (ROI): Both devices accept CT or MR input for contouring of anatomical regions: Head and Neck, Thorax, Abdomen and Pelvis. The supported structure models are:

Subject Device CT Models: A_Aorta, A_Aorta_Asc, A_Aorta_Dsc, A_LAD, A_Pulmonary, Bladder, Bladder_F, Bone_Illium_L, Bone_Illium_R, Bone_Mandible, Bone_Pelvic, Bone_Skull, Bone_Sternum, Bowel, Bowel_Bag, Bowel_Large, Bowel_Small, BrachialPlex_L, BrachialPlex_R, Brain, Brainstem, Breast_L, Breast_R, Bronchus, BuccalMucosa, Carina, CaudaEquina, Cavity_Oral, Cavity_Oral_Ext, Chestwall_L, Chestwall_OAR, Chestwall_R, Chestwall_RC_L, Chestwall_RC_R, Cochlea_L, Cochlea_R, Colon_Sigmoid, Cornea_L, Cornea_R, Duodenum, Ear_Internal_L, Ear_Internal_R, Esophagus, External, Eye_L, Eye_R, Femur_Head_L, Femur_Head_R, Femur_L, Femur_R, Femur_RTOG_L, Femur_RTOG_R, GallBladder, Genitals_F, Genitals_M, Glnd_Lacrimal_L, Glnd_Lacrimal_R, Glnd_Submand_L, Glnd_Submand_R, Glnd_Thyroid, HDR_Cylinder, Heart, Hippocampus_L, Hippocampus_R, Humerus_L, Humerus_R, Kidney_L, Kidney_R, Kidney_Outer_L, Kidney_Outer_R, Larynx, Larynx_Glottic, Larynx_NRG, Larynx_SG, Lens_L, Lens_R, Lips, Liver, LN_Ax_L, LN_Ax_L1_L, LN_Ax_L1_R, LN_Ax_L2_L, LN_Ax_L2_L3_L, LN_Ax_L2_L3_R, LN_Ax_L2_R, LN_Ax_L3_L, LN_Ax_L3_R, LN_Ax_R, LN_IMN_L, LN_IMN_R, LN_IMN_RC_L, LN_IMN_RC_R, LN_Inguinofem_L, LN_Inguinofem_R, LN_Neck_IA, LN_Neck_IB-V_L, LN_Neck_IB-V_R, LN_Neck_II_L, LN_Neck_II_R, LN_Neck_II-IV_L, LN_Neck_II-IV_R, LN_Neck_II-V_L, LN_Neck_II-V_R, LN_Neck_III_L, LN_Neck_III_R, LN_Neck_IV_L, LN_Neck_IV_R, LN_Neck_V_L, LN_Neck_V_R, LN_Neck_VIA, LN_Neck_VIIA_L, LN_Neck_VIIA_R, LN_Neck_VIIB_L, LN_Neck_VIIB_R, LN_Paraaortic, LN_Pelvics, LN_Pelvic_NRG, LN_Sclav_L, LN_Sclav_R, LN_Sclav_RADCOMP_L, LN_Sclav_RADCOMP_R, Lobe_Temporal_L, Lobe_Temporal_R, Lung_L, Lung_R, Macula_L, Macula_R, Marrow_Ilium_L, Marrow_Ilium_R, Musc_Constrict, Nipple_L, Nipple_R, OpticChiasm, OpticNrv_L, OpticNrv_R, Pancreas, Parotid_L, Parotid_R, PenileBulb, Pericardium, Pituitary, Prostate, Rectum, Rectum_F, Retina_L, Retina_R, Rib, Rib_L, Rib_R, SeminalVes, SpinalCanal, SpinalCord, Spleen, Stomach, Trachea, UteroCervix, V_Venacava_I, V_Venacava_S, VB, VB_C1, VB_C2, VB_C3, VB_C4, VB_C5, VB_C6, VB_C7, VB_L1, VB_L2, VB_L3, VB_L4, VB_L5, VB_T01, VB_T02, VB_T03, VB_T04, VB_T05, VB_T06, VB_T07, VB_T08, VB_T09, VB_T10, VB_T11, VB_T12.

Subject Device MR Models: Brainstem, Cerebellum, Eye_L, Eye_R, Glnd_Prostate, Hippocampus_L, Hippocampus_R, Hypo_True, Hypothalamus, OpticChiasm, OpticNrv_L, OpticNrv_R, OpticTract_L, OpticTract_R, Pituitary, Prostate, SeminalVes.

Predicate Device CT Models: A_Aorta, A_Aorta_Asc, A_Aorta_Dsc, A_LAD, Bladder, Bone_Illium_L, Bone_Illium_R, Bone_Mandible, Bowel_Bag, BrachialPlex_L, BrachialPlex_R, Brain, Brainstem, Breast_L, Breast_R, Bronchus, Carina, CaudaEquina, Cavity_Oral, Cochlea_L, Cochlea_R, Ear_Internal_L, Ear_Internal_R, Esophagus, External, Eye_L, Eye_R, Femur_L, Femur_R, Femur_RTOG_L, Femur_RTOG_R, Glnd_Lacrimal_L, Glnd_Lacrimal_R, Glnd_Submand_L, Glnd_Submand_R, Glnd_Thyroid, HDR_Cylinder, Heart, Humerus_L, Humerus_R, Kidney_L, Kidney_R, Kidney_Outer_L, Kidney_Outer_R, Larynx, Lens_L, Lens_R, Lips, Liver, LN_Ax_L, LN_Ax_R, LN_IMN_L, LN_IMN_R, LN_Neck_IA, LN_Neck_IB-V_L, LN_Neck_IB-V_R, LN_Neck_II_L, LN_Neck_II_R, LN_Neck_II-IV_L, LN_Neck_II-IV_R, LN_Neck_III_L, LN_Neck_III_R, LN_Neck_IV_L, LN_Neck_IV_R, LN_Neck_VIA, LN_Neck_VIIA_L, LN_Neck_VIIA_R, LN_Neck_VIIB_L, LN_Neck_VIIB_R, LN_Pelvics, LN_Sclav_L, LN_Sclav_R, Lung_L, Lung_R, Marrow_Ilium_L, Marrow_Ilium_R, Musc_Constrict, OpticChiasm, OpticNrv_L, OpticNrv_R, Parotid_L, Parotid_R, PenileBulb, Pituitary, Prostate, Rectum, Rib, SeminalVes, SpinalCanal, SpinalCord, Stomach, Trachea, V_Venacava_S.

Predicate Device MR Models: Brainstem, Hippocampus_L, Hippocampus_R, OpticChiasm, OpticNrv_L, OpticNrv_R.

Computer platform & Operating System (identical): Windows based .NET front-end application that also serves as agent Uploader, supporting Microsoft Windows 10 (64-bit) and Microsoft Windows Server 2016. Cloud-based server automatic contouring application compatible with Linux. Windows python-based automatic contouring application supporting Microsoft Windows 10 (64-bit) and Microsoft Windows Server 2016.

As shown in Table 3, almost all technological characteristics are either substantially equivalent or a subset of the Predicate Device's technological characteristics.

5.7. Discussion of differences

Minor differences

The following minor differences exist, but do not represent any significant additional risks or decreased effectiveness for the device for its intended use:

  • New CT Models:
    Compared with the Predicate Device, AutoContour Model RADAC V3 supports contouring 90 new models on CT images (the new models are listed below). The addition of these models does not represent a significant deviation from the intended use and operation of AutoContour, nor does it introduce a new significant unmitigated risk, because:

(a) a very similar CNN architecture was used to train these new CT models;
(b) all new models passed the same DSC test protocol criteria that was applied to the models in the predicate device for similar structure sizes;
(c) the same risk mitigations that have been applied to the predicate device models have also been applied to all new models.

  • A_Pulmonary
  • Bladder_F
  • Bone_Pelvic
  • Bone_Skull
  • Bone_Sternum
  • Bowel
  • Bowel_Large
  • Bowel_Small
  • BuccalMucosa
  • Cavity_Oral_Ext
  • Chestwall_L
  • ChestWall_OAR
  • Chestwall_R
  • Chestwall_RC_L
  • Chestwall_RC_R
  • Colon_Sigmoid
  • Cornea_L
  • Cornea_R
  • Duodenum
  • Femur_Head_L
  • Femur_Head_R
  • GallBladder
  • Genitals_F
  • Genitals_M
  • Hippocampus_L
  • Hippocampus_R
  • Larynx_Glottic
  • Larynx_NRG
  • Larynx_SG
  • LN_Ax_L1_L
  • LN_Ax_L1_R
  • LN_Ax_L2_L
  • LN_Ax_L2_L3_L
  • LN_Ax_L2_L3_R
  • LN_Ax_L2_R
  • LN_Ax_L3_L
  • LN_Ax_L3_R
  • LN_IMN_RC_L
  • LN_IMN_RC_R
  • LN_Inguinofem_L
  • LN_Inguinofem_R
  • LN_Neck_II-V_L
  • LN_Neck_II-V_R
  • LN_Neck_V_L
  • LN_Neck_V_R
  • LN_Paraaortic
  • LN_Pelvics_NRG
  • LN_Sclav_RC_L
  • LN_Sclav_RC_R
  • Lobe_Temporal_L
  • Lobe_Temporal_R
  • Macula_L
  • Macula_R
  • Nipple_L
  • Nipple_R
  • Pancreas
  • Pericardium
  • Rectum_F
  • Retina_L
  • Retina_R
  • Rib_L
  • Rib_R
  • Spleen
  • UteroCervix
  • V_Venacava_I
  • VB
  • VB_C1
  • VB_C2
  • VB_C3
  • VB_C4
  • VB_C5
  • VB_C6
  • VB_C7
  • VB_L1
  • VB_L2
  • VB_L3
  • VB_L4
  • VB_L5
  • VB_T01
  • VB_T02
  • VB_T03
  • VB_T04
  • VB_T05
  • VB_T06
  • VB_T07
  • VB_T08
  • VB_T09
  • VB_T10
  • VB_T11
  • VB_T12
  • New MR Models:

Compared with the Predicate Device, AutoContour Model RADAC V3 supports contouring 11 new models on MR images (the new models are listed below). The addition of these models does not represent a significant deviation from the intended use and operation of AutoContour, nor does it introduce a new significant unmitigated risk, because:


(a) a very similar CNN architecture was used to train these new MR models;
(b) all new models passed the same DSC test protocol criteria that was applied to the models in the predicate device for similar structure sizes;
(c) the same risk mitigations that have been applied to the predicate device models have also been applied to all new models.

  • Cerebellum
  • Eye_L
  • Eye_R
  • Glnd_Prostate
  • Hypo_True
  • Hypothalamus
  • OpticTract_L
  • OpticTract_R
  • Pituitary
  • Prostate
  • SeminalVes

5.8. Performance Data

The following performance data were provided in support of the substantial equivalence determination.

Sterilization & Shelf-life Testing

AutoContour is a pure software device; it is not supplied sterile because it does not come into contact with the patient, and it does not have a shelf life.

Biocompatibility

AutoContour is a pure software device and does not come into contact with the patient.

Electrical safety and electromagnetic compatibility (EMC)

AutoContour is a pure software device, hence no Electromagnetic Compatibility and Electrical Safety testing was conducted for the Subject Device.

Software Verification and Validation Testing

Summary

As with the Predicate Device, no clinical trials were performed for AutoContour Model RADAC V3. Non-clinical tests were performed according to Radformation's AutoContour Complete Test Protocol and Report, which demonstrates that AutoContour Model RADAC V3 performs as intended per its indications for use. Further tests were performed on datasets independent of the training and validation sets in order to validate the generalizability of the machine learning models.


Description of Changes to Test Protocol

Changes to the testing protocol between AutoContour RADAC V2 and RADAC V3 were made to improve reviewer independence and the validation dataset for the intended population. These changes better demonstrate the ability of the structure model outputs in assisting the user to contour more efficiently as per AutoContour's Indications for Use.

We do not consider the changes a significant deviation from the past report, as the primary passing criterion is still the same minimum mean DSC score together with a qualitative review of the structure model output. For RADAC V3 structure models, additional DSC and qualitative-review validation was performed on image data acquired independently of the data used for training. Additionally, independent reviewers (not employed by Radformation) evaluated the clinical appropriateness of structure models as they would be evaluated for the purposes of treatment planning. This external review was performed as a replacement for the intraobserver variability testing done with the RADAC V2 structure models, as it better quantified the usefulness of the structure model outputs in an unbiased clinical setting.

The RADAC V3 test protocol also adds a section that addresses the validation of any existing structure models that were approved within previous releases of the software (RADAC V2). This regression testing was added to confirm that software updates made for any new releases do not affect the output of any previously approved models. The addition of this test is not significant to the testing of the new structure models, as it only confirms that no structure output "drift" has occurred between version releases.

Testing Summary

Mean Dice Similarity Coefficient (DSC) was used to validate the accuracy of structure model outputs when tested on image data sequestered from the original training data population. The test datasets were independent of those used for training and comprised approximately 10% of the number of training image sets used as input for the model. For CT structure models there were an average of 373 training and 50 testing image sets. Among the patients used for CT training and testing, 51.7% were male and 48.3% female. Patient ages ranged as follows: 11-30: 0.3%; 31-50: 6.2%; 51-70: 43.3%; 71-100: 50.3%. Race: 84.0% White, 12.8% Black or African American, 3.2% Other. CT datasets spanned the treatment subgroups most typically found in a radiation therapy clinic, the most common diagnoses being cancers of the Prostate (21%), Breast (21%), Lung (29%), Head and Neck (16%), and Other (13%). Images were acquired using a Philips Big Bore CT simulator, with the majority of scans having an average slice thickness of 2 mm, in-plane resolution between 1-1.2 mm, and acquisition parameters of 120 kVp and 674 +/- 329 average mAs.

Ground truth for each test data set was generated manually using consensus (NRG/RTOG) guidelines, as appropriate, by three clinically experienced experts: 2 radiation therapy physicists and 1 radiation dosimetrist.

Structure models were categorized into three size categories, as DSC metrics can be sensitive to structure volume. A structure passed initial validation if the mean DSC exceeded 0.8 for Large volume structures (e.g., Bladder, Spleen), 0.65 for Medium volume structures (e.g., gallbladder, duodenum), and 0.5 for Small structures (e.g., Cornea, Retina). For CT structure models, large, medium, and small structures resulted in a mean DSC of 0.88+/-0.06, 0.88+/-0.08, and 0.75+/-0.12, respectively. A full summary of the CT structure DSC is available below:

Table 4: CT Training Data Results for AutoContour Model RADAC V3
| CT Structure | Size | Pass Criteria | # of Training Sets | # of Testing Sets | DSC (Avg) | DSC Std Dev | Lower Bound 95% Confidence Interval |
|---|---|---|---|---|---|---|---|
| A_Pulmonary | Medium | 0.65 | 169 | 43 | 0.88 | 0.03 | 0.83 |
| Bladder_F | Large | 0.8 | 252 | 63 | 0.94 | 0.03 | 0.89 |
| Bone_Pelvic | Large | 0.8 | 201 | 51 | 0.94 | 0.01 | 0.92 |
| Bone_Skull | Large | 0.8 | 80 | 20 | 0.92 | 0.01 | 0.90 |
| Bone_Sternum | Medium | 0.65 | 80 | 20 | 0.9 | 0.02 | 0.87 |
| Bowel | Medium | 0.65 | 705 | 45 | 0.93 | 0.08 | 0.80 |
| Bowel_Large | Medium | 0.65 | 805 | 52 | 0.89 | 0.17 | 0.61 |
| Bowel_Small | Medium | 0.65 | 705 | 45 | 0.93 | 0.05 | 0.85 |
| BuccalMucosa | Medium | 0.65 | 392 | 98 | 0.7 | 0.05 | 0.62 |
| Cavity_Oral_Ext | Medium | 0.65 | 392 | 98 | 0.94 | 0.02 | 0.91 |
| Chestwall_L | Large | 0.8 | 79 | 20 | 0.9 | 0.03 | 0.85 |
| Chestwall_R | Large | 0.8 | 79 | 20 | 0.9 | 0.03 | 0.85 |
| Chestwall_OAR | Large | 0.8 | 118 | 30 | 0.9 | 0.03 | 0.85 |
| Chestwall_RC_L | Large | 0.8 | 80 | 20 | 0.91 | 0.04 | 0.84 |
| Chestwall_RC_R | Large | 0.8 | 80 | 20 | 0.91 | 0.04 | 0.84 |
| Colon_Sigmoid | Medium | 0.65 | 392 | 98 | 0.66 | 0.28 | 0.20 |
| Cornea_L | Small | 0.5 | N/A* | N/A* | N/A* | N/A* | N/A* |
| Cornea_R | Small | 0.5 | N/A* | N/A* | N/A* | N/A* | N/A* |
| Duodenum | Medium | 0.65 | 659 | 44 | 0.88 | 0.16 | 0.62 |
| Femur_Head_L | Medium | 0.65 | 160 | 40 | 0.95 | 0.04 | 0.88 |
| Femur_Head_R | Medium | 0.65 | 160 | 40 | 0.95 | 0.04 | 0.88 |
| Gallbladder | Medium | 0.65 | 512 | 32 | 0.96 | 0.03 | 0.91 |
| Genitals_F | Large | 0.8 | 233 | 59 | 0.92 | 0.02 | 0.89 |
| Genitals_M | Large | 0.8 | 173 | 44 | 0.93 | 0.03 | 0.88 |
| Hippocampus_L | Medium | 0.65 | 226 | 57 | 0.67 | 0.1 | 0.51 |
| Hippocampus_R | Medium | 0.65 | 226 | 57 | 0.67 | 0.1 | 0.51 |
| Larynx_Glottic | Medium | 0.65 | 438 | 110 | 0.81 | 0.04 | 0.74 |
| Larynx_NRG | Medium | 0.65 | 449 | 113 | 0.8 | 0.04 | 0.73 |
| Larynx_SG | Medium | 0.65 | 413 | 104 | 0.78 | 0.04 | 0.71 |
| LN_Ax_L1_L | Large | 0.8 | 437 | 110 | 0.81 | 0.06 | 0.71 |
| LN_Ax_L1_R | Large | 0.8 | 437 | 110 | 0.81 | 0.06 | 0.71 |
| LN_Ax_L2_L | Medium | 0.65 | 203 | 51 | 0.79 | 0.04 | 0.72 |
| LN_Ax_L2_L3_L | Medium | 0.65 | 437 | 110 | 0.82 | 0.06 | 0.72 |
| LN_Ax_L2_L3_R | Medium | 0.65 | 437 | 110 | 0.82 | 0.06 | 0.72 |
| LN_Ax_L2_R | Medium | 0.65 | 203 | 51 | 0.79 | 0.04 | 0.72 |
| LN_Ax_L3_L | Medium | 0.65 | 203 | 51 | 0.74 | 0.07 | 0.62 |
| LN_Ax_L3_R | Medium | 0.65 | 203 | 51 | 0.74 | 0.07 | 0.62 |
| LN_IMN_RC_L | Medium | 0.65 | 100 | 25 | 0.78 | 0.05 | 0.70 |
| LN_IMN_RC_R | Medium | 0.65 | 100 | 25 | 0.78 | 0.05 | 0.70 |
| LN_Inguinofem_L | Large | 0.8 | 310 | 78 | 0.85 | 0.05 | 0.77 |
| LN_Inguinofem_R | Large | 0.8 | 310 | 78 | 0.85 | 0.05 | 0.77 |
| LN_Neck_II-V_L | Medium | 0.65 | 323 | 81 | 0.86 | 0.03 | 0.81 |
| LN_Neck_II-V_R | Medium | 0.65 | 323 | 81 | 0.86 | 0.03 | 0.81 |
| LN_Neck_V_L | Medium | 0.65 | 267 | 67 | 0.8 | 0.08 | 0.67 |
| LN_Neck_V_R | Medium | 0.65 | 267 | 67 | 0.8 | 0.08 | 0.67 |
| LN_Paraaortic | Large | 0.8 | 200 | 50 | 0.89 | 0.04 | 0.82 |
| LN_Pelvics_NRG | Large | 0.8 | 149 | 38 | 0.88 | 0.02 | 0.85 |
| LN_Sclav_RC_L | Medium | 0.65 | 200 | 51 | 0.8 | 0.04 | 0.73 |
| LN_Sclav_RC_R | Medium | 0.65 | 200 | 51 | 0.8 | 0.04 | 0.73 |
| Lobe_Temporal_L | Large | 0.8 | 174 | 44 | 0.88 | 0.03 | 0.83 |
| Lobe_Temporal_R | Large | 0.8 | 174 | 44 | 0.88 | 0.03 | 0.83 |
| Macula_L | Small | 0.5 | 120 | 31 | 0.64 | 0.1 | 0.48 |
| Macula_R | Small | 0.5 | 120 | 31 | 0.64 | 0.1 | 0.48 |
| Nipple_L | Medium | 0.65 | 91 | 23 | 0.74 | 0.1 | 0.58 |
| Nipple_R | Medium | 0.65 | 91 | 23 | 0.74 | 0.1 | 0.58 |
| Pancreas | Medium | 0.65 | 706 | 45 | 0.92 | 0.1 | 0.76 |
| Pericardium | Large | 0.8 | 160 | 41 | 0.94 | 0.02 | 0.91 |
| Rectum_F | Medium | 0.65 | 252 | 64 | 0.91 | 0.02 | 0.88 |
| Retina_L | Small | 0.5 | N/A* | N/A* | N/A* | N/A* | N/A* |
| Retina_R | Small | 0.5 | N/A* | N/A* | N/A* | N/A* | N/A* |
| Rib_L | Large | 0.8 | N/A* | N/A* | N/A* | N/A* | N/A* |
| Rib_R | Large | 0.8 | N/A* | N/A* | N/A* | N/A* | N/A* |
| Spleen | Large | 0.8 | 160 | 41 | 0.92 | 0.08 | 0.79 |
| UteroCervix | Medium | 0.65 | 143 | 36 | 0.81 | 0.11 | 0.63 |
| V_Venacava_I | Medium | 0.65 | 399 | 100 | 0.81 | 0.09 | 0.66 |
| VB | Large | 0.8 | 1051 | 64 | 0.99 | 0.01 | 0.98 |
| VB_C1 | Medium | 0.65 | 196 | 14 | 0.90 | 0.25 | 0.48 |
| VB_C2 | Medium | 0.65 | 202 | 13 | 0.99 | 0.01 | 0.97 |
| VB_C3 | Medium | 0.65 | 214 | 14 | 0.97 | 0.03 | 0.92 |
| VB_C4 | Medium | 0.65 | 234 | 15 | 0.90 | 0.24 | 0.51 |
| VB_C5 | Medium | 0.65 | 328 | 20 | 0.89 | 0.21 | 0.55 |
| VB_C6 | Medium | 0.65 | 520 | 33 | 0.87 | 0.24 | 0.49 |
| VB_C7 | Medium | 0.65 | 651 | 36 | 0.98 | 0.01 | 0.95 |
| VB_L1 | Large | 0.8 | 731 | 49 | 0.97 | 0.09 | 0.82 |
| VB_L2 | Large | 0.8 | 650 | 44 | 0.99 | 0.03 | 0.93 |
| VB_L3 | Large | 0.8 | 582 | 44 | 0.99 | 0.03 | 0.93 |
| VB_L4 | Large | 0.8 | 569 | 44 | 0.99 | 0.01 | 0.97 |
| VB_L5 | Large | 0.8 | 550 | 43 | 0.99 | 0.02 | 0.95 |
| VB_T01 | Medium | 0.65 | 663 | 37 | 0.98 | 0.01 | 0.96 |
| VB_T02 | Medium | 0.65 | 682 | 38 | 0.96 | 0.12 | 0.76 |
| VB_T03 | Medium | 0.65 | 700 | 38 | 0.96 | 0.14 | 0.73 |
| VB_T04 | Medium | 0.65 | 695 | 39 | 0.95 | 0.16 | 0.68 |
| VB_T05 | Medium | 0.65 | 687 | 39 | 0.97 | 0.08 | 0.84 |
| VB_T06 | Medium | 0.65 | 682 | 36 | 0.97 | 0.06 | 0.86 |
| VB_T07 | Medium | 0.65 | 686 | 39 | 0.94 | 0.16 | 0.69 |
| VB_T08 | Medium | 0.65 | 729 | 45 | 0.96 | 0.14 | 0.73 |
| VB_T09 | Medium | 0.65 | 778 | 49 | 0.98 | 0.04 | 0.92 |
| VB_T10 | Medium | 0.65 | 803 | 49 | 0.97 | 0.09 | 0.82 |
| VB_T11 | Medium | 0.65 | 814 | 48 | 0.98 | 0.07 | 0.86 |
| VB_T12 | Medium | 0.65 | 795 | 50 | 0.97 | 0.13 | 0.75 |


*N/A: Structures are generated based on a post-processing/boolean operation from previously released structure models (Eye, Rib) rather than generated from a CNN model. Quantitative testing for these structures is still performed in the external clinical testing below to validate appropriate contour generation and clinical acceptability of these derived structure models.
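The DSC metric and size-category pass criteria used throughout Tables 4-10 can be sketched as follows. This is an illustrative example only, not Radformation's implementation; the function names are ours, though the thresholds themselves come from the protocol described above.

```python
import numpy as np

# Minimum mean DSC required to pass initial validation, by structure size
# category (values taken from the protocol described above).
PASS_CRITERIA = {"Large": 0.8, "Medium": 0.65, "Small": 0.5}

def dice_similarity(auto_mask: np.ndarray, truth_mask: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    auto = auto_mask.astype(bool)
    truth = truth_mask.astype(bool)
    denom = auto.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(auto, truth).sum() / denom

def passes_initial_validation(mean_dsc: float, size_category: str) -> bool:
    """Apply the size-dependent minimum mean DSC threshold."""
    return mean_dsc > PASS_CRITERIA[size_category]

# Toy example: two overlapping cubic 3D masks (stand-ins for contour volumes)
a = np.zeros((10, 10, 10), dtype=bool); a[2:8, 2:8, 2:8] = True
b = np.zeros((10, 10, 10), dtype=bool); b[3:9, 3:9, 3:9] = True
dsc = dice_similarity(a, b)
```

A mean DSC computed this way over a model's test image sets is what each table row reports against its size-category threshold.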

Additional external clinical testing was performed in order to validate the accuracy of the models on image sets distinct from the training datasets. Publicly available CT datasets from The Cancer Imaging Archive (TCIA) were used, and both AutoContour structures and manually added ground truth contours, following the same structure guidelines used for structure model training, were added to the image sets.

Table 5: CT External Clinical Dataset References
| Model Group | Data Source ID | Data Citation |
|---|---|---|
| CT Pelvis | TCIA - Pelvic-Ref | Afua A. Yorke, Gary C. McDonald, David Solis Jr., Thomas Guerrero. (2019). Pelvic Reference Data. The Cancer Imaging Archive. DOI: 10.7937/TCIA.2019.woskq5oo |
| CT Head and Neck | TCIA - Head-Neck-PET-CT | Martin Vallières, Emily Kay-Rivest, Léo Jean Perrin, Xavier Liem, Christophe Furstoss, Nader Khaouam, Phuc Félix Nguyen-Tan, Chang-Shu Wang, Khalil Sultanem. (2017). Data from Head-Neck-PET-CT. The Cancer Imaging Archive. DOI: 10.7937/K9/TCIA.2017.8oje5q00 |
| CT Abdomen | TCIA - Pancreas-CT-CB | Hong, J., Reyngold, M., Crane, C., Cuaron, J., Hajj, C., Mann, J., Zinovoy, M., Yorke, E., LoCastro, E., Apte, A. P., & Mageras, G. (2021). Breath-hold CT and cone-beam CT images with expert manual organ-at-risk segmentations from radiation treatments of locally advanced pancreatic cancer [Data set]. The Cancer Imaging Archive. https://doi.org/10.7937/TCIA.ESHQ-4D90 |
| CT Thorax | TCIA - NSCLC | Aerts, H. J. W. L., Wee, L., Rios Velazquez, E., Leijenaar, R. T. H., Parmar, C., Grossmann, P., Carvalho, S., Bussink, J., Monshouwer, R., Haibe-Kains, B., Rietveld, D., Hoebers, F., Rietbergen, M. M., Leemans, C. R., Dekker, A., Quackenbush, J., Gillies, R. J., Lambin, P. (2019). Data From NSCLC-Radiomics [Data set]. The Cancer Imaging Archive. https://doi.org/10.7937/K9/TCIA.2015.PF0M9REI |
| CT Thorax | TCIA - LCTSC | Yang, J., Sharp, G., Veeraraghavan, H., Van Elmpt, W., Dekker, A., Lustberg, T., & Gooding, M. (2017). Data from Lung CT Segmentation Challenge (Version 3) [Data set]. The Cancer Imaging Archive. https://doi.org/10.7937/K9/TCIA.2017.3R3FVZ08 |

DSC values were calculated between ground truth contour data and AutoContour structures and rated against the same DSC passing criteria used for the training DSC validation. All structures passed the minimum DSC criteria for small, medium, and large structures, with mean DSCs of 0.79+/-0.11, 0.83+/-0.12, and 0.90+/-0.09, respectively. Additionally, the qualitative clinical appropriateness of AutoContour structures generated on these scans was graded by clinical experts on a scale from 1 to 5, where 5 indicates a contour requiring no additional edits and 1 indicates that a full manual re-contour of the structure would be required. An average score >= 3 was used to determine whether a structure model would ultimately be beneficial clinically. An average rating of 4.5 was found across all CT structure models, demonstrating that only minor edits would be required to make the structure models acceptable for clinical use.
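The summary statistics reported in the results tables can be sketched as below. Note the hedge: this summary does not state the formula behind the "Lower Bound 95% Confidence Interval" column, but the tabulated values are consistent with a one-sided 95% normal bound (mean − 1.645·σ), which is what this illustrative sketch assumes; the function and field names are ours.

```python
from statistics import mean, stdev

# Assumed one-sided 95% normal quantile; the tabulated lower bounds are
# consistent with mean - 1.645 * std, though the submission gives no formula.
Z_95_ONE_SIDED = 1.645

def summarize(dsc_scores, ratings, pass_criterion):
    """Summarize per-image DSC scores and 1-5 reviewer ratings for one model."""
    avg = mean(dsc_scores)
    sd = stdev(dsc_scores)
    return {
        "dsc_avg": round(avg, 2),
        "dsc_std": round(sd, 2),
        "lower_95": round(avg - Z_95_ONE_SIDED * sd, 2),
        "passes_dsc": avg > pass_criterion,
        # Reviewer scale: 5 = no edits needed, 1 = full manual re-contour;
        # an average rating >= 3 indicates the model is clinically beneficial.
        "passes_review": mean(ratings) >= 3,
    }

result = summarize([0.9, 0.88, 0.86, 0.92], [5, 4, 4, 5], pass_criterion=0.8)
```

Each row of Tables 6 and 10 corresponds to one such summary over a model's external test sets.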


Table 6: CT External Reviewer Results for AutoContour Model RADAC V3
| CT Structure | Size | Pass Criteria | # Testing Sets | Average DSC | Average DSC Std. Dev | Lower Bound 95% Confidence Interval | External Reviewer Average Rating (1-5) |
|---|---|---|---|---|---|---|---|
| A_Pulmonary | Medium | 0.65 | 20 | 0.93 | 0.02 | 0.89 | 4.6 |
| Bladder_F | Large | 0.8 | 20 | 0.87 | 0.22 | 0.52 | 4.8 |
| Bone_Pelvic | Large | 0.8 | 41 | 0.93 | 0.04 | 0.86 | 4.4 |
| Bone_Skull | Large | 0.8 | 23 | 0.98 | 0.01 | 0.97 | 4.5 |
| Bone_Sternum | Medium | 0.65 | 20 | 0.92 | 0.03 | 0.88 | 4.8 |
| Bowel | Medium | 0.65 | 46 | 0.86 | 0.07 | 0.75 | 4.3 |
| Bowel_Large | Medium | 0.65 | 46 | 0.81 | 0.07 | 0.70 | 4.1 |
| Bowel_Small | Medium | 0.65 | 46 | 0.77 | 0.07 | 0.66 | 4.0 |
| BuccalMucosa | Medium | 0.65 | 23 | 0.68 | 0.10 | 0.51 | 4.2 |
| Cavity_Oral_Ext | Medium | 0.65 | 23 | 0.97 | 0.01 | 0.95 | 4.9 |
| Chestwall_L | Large | 0.8 | 20 | 0.88 | 0.10 | 0.71 | 4.4 |
| Chestwall_R | Large | 0.8 | 20 | 0.90 | 0.05 | 0.82 | 4.4 |
| Chestwall_OAR | Large | 0.8 | 20 | 0.94 | 0.03 | 0.89 | 4.8 |
| Chestwall_RC_L | Large | 0.8 | 20 | 0.91 | 0.05 | 0.83 | 5.0 |
| Chestwall_RC_R | Large | 0.8 | 20 | 0.91 | 0.03 | 0.85 | 5.0 |
| Colon_Sigmoid | Medium | 0.65 | 40 | 0.68 | 0.20 | 0.35 | 4.2 |
| Cornea_L | Small | 0.5 | 23 | 0.80 | 0.04 | 0.73 | 4.8 |
| Cornea_R | Small | 0.5 | 23 | 0.79 | 0.06 | 0.69 | 4.8 |
| Duodenum | Medium | 0.65 | 25 | 0.72 | 0.15 | 0.48 | 4.8 |
| Femur_Head_L | Medium | 0.65 | 41 | 0.94 | 0.06 | 0.85 | 4.9 |
| Femur_Head_R | Medium | 0.65 | 41 | 0.93 | 0.07 | 0.82 | 4.8 |
| Gallbladder | Medium | 0.65 | 21 | 0.86 | 0.05 | 0.78 | 4.8 |
| Genitals_F | Large | 0.8 | 20 | 0.89 | 0.04 | 0.83 | 4.5 |
| Genitals_M | Large | 0.8 | 21 | 0.96 | 0.03 | 0.92 | 4.7 |
| Hippocampus_L | Medium | 0.65 | 23 | 0.85 | 0.08 | 0.71 | 4.6 |
| Hippocampus_R | Medium | 0.65 | 23 | 0.87 | 0.08 | 0.74 | 4.6 |
| Larynx_Glottic | Medium | 0.65 | 23 | 0.83 | 0.10 | 0.66 | 5.0 |
| Larynx_NRG | Medium | 0.65 | 23 | 0.93 | 0.05 | 0.85 | 4.7 |
| Larynx_SG | Medium | 0.65 | 23 | 0.81 | 0.10 | 0.65 | 4.8 |
| LN_Ax_L1_L | Large | 0.8 | 20 | 0.86 | 0.10 | 0.70 | 4.4 |
| LN_Ax_L1_R | Large | 0.8 | 20 | 0.87 | 0.08 | 0.74 | 4.3 |
| LN_Ax_L2_L | Medium | 0.65 | 20 | 0.74 | 0.05 | 0.65 | 4.1 |
| LN_Ax_L2_L3_L | Medium | 0.65 | 20 | 0.86 | 0.07 | 0.75 | 4.3 |
| LN_Ax_L2_L3_R | Medium | 0.65 | 20 | 0.87 | 0.04 | 0.80 | 4.5 |
| LN_Ax_L2_R | Medium | 0.65 | 20 | 0.75 | 0.05 | 0.67 | 4.0 |
| LN_Ax_L3_L | Medium | 0.65 | 20 | 0.73 | 0.09 | 0.57 | 4.2 |
| LN_Ax_L3_R | Medium | 0.65 | 20 | 0.75 | 0.08 | 0.61 | 4.2 |
| LN_IMN_RC_L | Medium | 0.65 | 20 | 0.84 | 0.09 | 0.68 | 3.9 |
| LN_IMN_RC_R | Medium | 0.65 | 19 | 0.81 | 0.14 | 0.58 | 3.9 |
| LN_Inguinofem_L | Large | 0.8 | 40 | 0.91 | 0.09 | 0.77 | 4.1 |
| LN_Inguinofem_R | Large | 0.8 | 38 | 0.90 | 0.08 | 0.77 | 4.1 |
| LN_Neck_II-V_L | Medium | 0.65 | 23 | 0.98 | 0.01 | 0.97 | 4.5 |
| LN_Neck_II-V_R | Medium | 0.65 | 23 | 0.98 | 0.01 | 0.96 | 4.5 |
| LN_Neck_V_L | Medium | 0.65 | 23 | 0.79 | 0.04 | 0.72 | 4.6 |
| LN_Neck_V_R | Medium | 0.65 | 23 | 0.76 | 0.07 | 0.65 | 4.4 |
| LN_Paraaortic | Large | 0.8 | 23 | 0.88 | 0.05 | 0.79 | 4.6 |
| LN_Pelvics_NRG | Medium | 0.65 | 41 | 0.79 | 0.18 | 0.48 | 4.4 |
| LN_Sclav_RC_L | Medium | 0.65 | 20 | 0.72 | 0.16 | 0.45 | 4.3 |
| LN_Sclav_RC_R | Medium | 0.65 | 20 | 0.70 | 0.14 | 0.47 | 4.2 |
| Lobe_Temporal_L | Large | 0.8 | 23 | 0.87 | 0.05 | 0.79 | 4.6 |
| Lobe_Temporal_R | Large | 0.8 | 23 | 0.88 | 0.05 | 0.81 | 4.6 |
| Macula_L | Small | 0.5 | 23 | 0.70 | 0.26 | 0.27 | 5.0 |
| Macula_R | Small | 0.5 | 23 | 0.72 | 0.23 | 0.34 | 5.0 |
| Nipple_L | Medium | 0.65 | 35 | 0.78 | 0.11 | 0.60 | 5.0 |
| Nipple_R | Medium | 0.65 | 38 | 0.78 | 0.17 | 0.50 | 5.0 |
| Pancreas | Medium | 0.65 | 25 | 0.71 | 0.15 | 0.47 | 4.1 |
| Pericardium | Large | 0.8 | 20 | 0.99 | 0.00 | 0.98 | 4.7 |
| Rectum_F | Medium | 0.65 | 20 | 0.90 | 0.09 | 0.76 | 4.2 |
| Retina_L | Small | 0.5 | 23 | 0.88 | 0.03 | 0.83 | 5.0 |
| Retina_R | Small | 0.5 | 23 | 0.88 | 0.03 | 0.84 | 5.0 |
| Rib_L | Large | 0.8 | 20 | 0.85 | 0.04 | 0.78 | 4.9 |
| Rib_R | Large | 0.8 | 20 | 0.85 | 0.04 | 0.79 | 4.8 |
| Spleen | Large | 0.8 | 24 | 0.95 | 0.07 | 0.84 | 4.9 |
| Stomach (Update) | Large | 0.8 | 25 | 0.90 | 0.06 | 0.80 | 4.9 |
| CaudaEquina (Update) | Medium | 0.65 | 42 | 0.88 | 0.06 | 0.77 | 4.5 |
| UteroCervix | Medium | 0.65 | 14 | 0.74 | 0.23 | 0.36 | 3.8 |
| V_Venacava_I | Medium | 0.65 | 41 | 0.81 | 0.08 | 0.68 | 4.8 |
| VB | Large | 0.8 | 63 | 0.96 | 0.03 | 0.91 | 4.4 |
| VB_C1 | Medium | 0.65 | 23 | 0.95 | 0.02 | 0.92 | 4.6 |
| VB_C2 | Medium | 0.65 | 24 | 0.96 | 0.02 | 0.92 | 4.7 |
| VB_C3 | Medium | 0.65 | 26 | 0.93 | 0.10 | 0.77 | 4.5 |
| VB_C4 | Medium | 0.65 | 33 | 0.87 | 0.20 | 0.54 | 4.6 |
| VB_C5 | Medium | 0.65 | 40 | 0.87 | 0.16 | 0.61 | 4.6 |
| VB_C6 | Medium | 0.65 | 42 | 0.90 | 0.07 | 0.78 | 4.6 |
| VB_C7 | Medium | 0.65 | 42 | 0.93 | 0.04 | 0.87 | 4.8 |
| VB_L1 | Large | 0.8 | 32 | 0.88 | 0.22 | 0.53 | 4.7 |
| VB_L2 | Large | 0.8 | 25 | 0.89 | 0.21 | 0.55 | 4.7 |
| VB_L3 | Large | 0.8 | 20 | 0.92 | 0.19 | 0.60 | 4.9 |
| VB_L4 | Large | 0.8 | 20 | 0.93 | 0.21 | 0.58 | 4.8 |
| VB_L5 | Large | 0.8 | 20 | 0.92 | 0.20 | 0.60 | 4.4 |
| VB_T01 | Medium | 0.65 | 43 | 0.94 | 0.13 | 0.72 | 4.7 |
| VB_T02 | Medium | 0.65 | 43 | 0.95 | 0.05 | 0.86 | 4.5 |
| VB_T03 | Medium | 0.65 | 43 | 0.93 | 0.12 | 0.73 | 4.9 |
| VB_T04 | Medium | 0.65 | 43 | 0.93 | 0.14 | 0.69 | 4.6 |
| VB_T05 | Medium | 0.65 | 44 | 0.90 | 0.20 | 0.58 | 4.6 |
| VB_T06 | Medium | 0.65 | 43 | 0.88 | 0.22 | 0.51 | 4.7 |
| VB_T07 | Medium | 0.65 | 37 | 0.83 | 0.29 | 0.36 | 4.7 |
| VB_T08 | Medium | 0.65 | 27 | 0.79 | 0.33 | 0.25 | 4.7 |
| VB_T09 | Medium | 0.65 | 23 | 0.79 | 0.32 | 0.26 | 4.7 |
| VB_T10 | Medium | 0.65 | 23 | 0.81 | 0.31 | 0.29 | 4.7 |
| VB_T11 | Medium | 0.65 | 24 | 0.82 | 0.31 | 0.30 | 4.7 |
| VB_T12 | Medium | 0.65 | 31 | 0.86 | 0.24 | 0.46 | 4.8 |


The MR training data set used for initial testing of the Brain models (Cerebellum, Hypothalamus, Hypo_True, OpticTract_L, OpticTract_R, Pituitary, Eye_L, Eye_R) had an average of 274 training and 92 testing image sets, acquired from the Cancer Imaging Archive GLIS-RT dataset. These data sets consisted primarily of glioblastoma and astrocytoma patients. Images were acquired on either a GE Signa HDxT (3T) or Siemens Skyra (3T) scanner, with an average slice thickness of 1 mm, in-plane resolution between 0.5-1.0 mm, and acquisition parameters of TR=2.3-8.9ms, TE=3.0-3.2s.

The MR training data used for initial testing of the MR Pelvis models (Prostate, Glnd_Prostate, and SeminalVes) were taken from the Cancer Imaging Archive Prostate-MRI-US-Biopsy dataset, which consisted of patients with a suspicion of prostate cancer due to elevated PSA and/or suspicious imaging findings. The images used were T2-Axial image sets acquired on a 3T Siemens Skyra scanner. The majority of pulse sequences used are 3D T2:SPC, with TR/TE 2200/203, Matrix/FOV 256 × 205/14 × 14 cm, and 1.5 mm slice spacing.

Table 7: MR Initial Testing Dataset References
| Model Group | Data Source ID | Data Citation |
|---|---|---|
| MR Brain | MR - Renown | Shusharina, N., & Bortfeld, T. (2021). Glioma Image Segmentation for Radiotherapy: RT targets, barriers to cancer spread, and organs at risk [Data set]. The Cancer Imaging Archive. https://doi.org/10.7937/TCIA.T905-ZQ20 |
| MR Pelvis | Gold Atlas Pelvis | Natarajan, S., Priester, A., Margolis, D., Huang, J., & Marks, L. (2020). Prostate MRI and Ultrasound With Pathology and Coordinates of Tracked Biopsy (Prostate-MRI-US-Biopsy) [Data set]. The Cancer Imaging Archive. DOI: 10.7937/TCIA.2020.A61IOC1A |

Datasets used for testing were removed from the training dataset pool before model training began, and used exclusively for testing.

Ground truth for each test data set was generated manually using consensus (NRG/RTOG) guidelines, as appropriate, by three clinically experienced experts: 2 radiation therapy physicists and 1 radiation dosimetrist. For MR structure models, a mean training DSC of 0.87+/-0.07 was found for medium models and 0.74+/-0.07 for small models.

Table 8: MR Training Data Results for AutoContour Model RADAC V3
| MR Models | Size | Pass Criteria | DSC (Avg) | DSC Std Dev (Avg) | Lower Bound 95% Confidence Interval |
|---|---|---|---|---|---|
| Cerebellum | Medium | 0.65 | 0.93 | 0.01 | 0.91 |
| Glnd_Prostate | Medium | 0.65 | 0.87 | 0.04 | 0.80 |
| Hypothalamus | Small | 0.50 | 0.79 | 0.04 | 0.72 |
| Hypo_True | Small | 0.50 | 0.71 | 0.06 | 0.61 |
| OpticTract_L | Small | 0.50 | 0.72 | 0.08 | 0.59 |
| OpticTract_R | Small | 0.50 | 0.72 | 0.08 | 0.59 |
| Pituitary | Small | 0.50 | 0.75 | 0.11 | 0.57 |
| Prostate | Medium | 0.65 | 0.89 | 0.03 | 0.84 |
| SeminalVes | Medium | 0.65 | 0.74 | 0.13 | 0.53 |
| Eye_L | Medium | 0.65 | 0.90 | 0.10 | 0.74 |
| Eye_R | Medium | 0.65 | 0.90 | 0.10 | 0.74 |


Additional external clinical testing was performed in order to validate the accuracy of the models on image sets distinct from the training datasets.

Table 9: MR External Clinical Dataset References
| Model Group | Data Source ID | Data Citation |
|---|---|---|
| MR Brain | MR - Renown | N/A |
| MR Pelvis | Gold Atlas Pelvis | Nyholm, Tufve, Stina Svensson, Sebastian Andersson, Joakim Jonsson, Maja Sohlin, Christian Gustafsson, Elisabeth Kjellén, et al. 2018. "MR and CT Data with Multi Observer Delineations of Organs in the Pelvic Area - Part of the Gold Atlas Project." Medical Physics 12 (10): 3218-21. doi:10.1002/mp.12748. |

For the Brain models, datasets containing 20 MR T1 Ax post (BRAVO) image scans acquired with a GE MR750w scanner were obtained via data-use agreement from a clinical partner. Images had an average slice thickness of 1.6 mm, in-plane resolution of 0.94 mm, and acquisition parameters of TR=5.98ms, TE=96.8s. Data for testing of the MR Pelvis structure models were acquired from the publicly available Gold Atlas data set, which contained 19 images of patients with prostate or rectal cancer. Images were acquired on a GE DISCOVERY MR750w (Ax T2 FRFSE) with in-plane resolution of 0.9 mm, slice thickness of 2.5 mm, TR=5.988s, and TE=96.8ms.

DSC values were calculated between ground truth contour data and AutoContour structures and rated against the same DSC passing criteria as was used for the training DSC validation. All structures passed the minimum DSC criteria for small and medium structures, with mean DSCs of 0.74+/-0.07 and 0.87+/-0.07, respectively. Additionally, the qualitative clinical appropriateness of AutoContour structures generated on these scans was graded by clinical experts on a scale from 1 to 5, where 5 indicates a contour requiring no additional edits and 1 indicates that a full manual re-contour of the structure would be required. An average score >= 3 was used to determine whether a structure model would ultimately be beneficial clinically. An average rating of 4.4 was found across all MR structure models, demonstrating that only minor edits would be required to make the structure models acceptable for clinical use.


Table 10: MR External Reviewer Results for AutoContour Model RADAC V3
| MR Models | Size | Pass Criteria | # External Test Data Sets | Average DSC | Average DSC Std. Dev | Lower Bound 95% Confidence Interval | External Reviewer Average Rating (1-5) |
|---|---|---|---|---|---|---|---|
| Cerebellum | Medium | 0.65 | 20 | 0.93 | 0.01 | 0.91 | 4.0 |
| Glnd_Prostate | Medium | 0.65 | 18 | 0.87 | 0.04 | 0.80 | 4.3 |
| Hypothalamus | Small | 0.50 | 20 | 0.79 | 0.04 | 0.72 | 4.2 |
| Hypo_True | Small | 0.50 | 19 | 0.71 | 0.06 | 0.61 | 4.3 |
| OpticTract_L | Small | 0.50 | 20 | 0.72 | 0.08 | 0.59 | 4.4 |
| OpticTract_R | Small | 0.50 | 19 | 0.72 | 0.08 | 0.59 | 4.5 |
| Pituitary | Small | 0.50 | 19 | 0.75 | 0.11 | 0.57 | 4.2 |
| Prostate | Medium | 0.65 | 19 | 0.89 | 0.03 | 0.84 | 4.2 |
| SeminalVes | Medium | 0.65 | 19 | 0.74 | 0.13 | 0.53 | 4.2 |
| Eye_L | Medium | 0.65 | 19 | 0.90 | 0.10 | 0.74 | 4.9 |
| Eye_R | Medium | 0.65 | 20 | 0.90 | 0.10 | 0.74 | 4.9 |

Validation testing of the AutoContour application demonstrated that the software meets user needs and intended uses of the application.

Mechanical and Acoustic Testing

Not Applicable (Standalone Software)

Animal Study

No animal studies were conducted using the Subject Device, AutoContour.

Clinical Studies

No clinical studies were conducted using the Subject Device, AutoContour.

5.9. Conclusion

AutoContour Model RADAC V3 is deemed substantially equivalent to the Predicate Device, AutoContour Model RADAC V2 (K220598). Verification and Validation testing and the Risk Management Report demonstrate that AutoContour Model RADAC V3 is as safe and effective as the Predicate Device. The technological characteristics table demonstrates the similarity between AutoContour Model RADAC V3 and the Predicate Device and does not raise any questions on the safety and effectiveness of the Subject Device.