AutoContour is intended to assist radiation treatment planners in contouring structures within medical images in preparation for radiation therapy treatment planning.
As with AutoContour RADAC, the AutoContour RADAC V2 device is software that uses DICOM-compliant image data (CT or MR) as input to (1) automatically contour various structures of interest for radiation therapy treatment planning using machine learning based contouring, (2) allow the user to review and modify the resulting contours, and (3) generate DICOM-compliant structure set data that can be imported into a radiation therapy treatment planning system. The deep-learning based structure models are trained using imaging datasets consisting of anatomical organs of the head and neck, thorax, abdomen, and pelvis for adult male and female patients.
AutoContour RADAC V2 consists of 3 main components:
- A .NET client application designed to run on the Windows Operating System allowing the user to load image and structure sets for upload to the cloud-based server for automatic contouring, perform registration with other image sets, as well as review, edit, and export the structure set.
- A local "agent" service designed to run on the Windows Operating System that is configured by the user to monitor a network storage location for new CT and MR datasets that are to be automatically contoured.
- A cloud-based automatic contouring service that produces initial contours based on image sets sent by the user from the .NET client application.
The provided text describes the acceptance criteria and the study demonstrating that the device, AutoContour RADAC V2, meets those criteria. A breakdown of the requested information follows:
1. A table of acceptance criteria and the reported device performance
The acceptance criterion for contouring accuracy is measured by the Mean Dice Similarity Coefficient (DSC), which varies based on the estimated volume of the structure.
| Structure Size Category | DSC Acceptance Criteria (Mean) | Reported Device Performance (Mean DSC +/- STD) |
|---|---|---|
| Large volume structures | > 0.80 | 0.94 +/- 0.03 |
| Medium volume structures | > 0.65 | 0.82 +/- 0.09 |
| Small volume structures | > 0.50 | 0.61 +/- 0.14 |
The document also provides detailed DSC results for each contoured structure, which all meet or exceed their respective size category's acceptance criteria. For example, for "A_Aorta" (Large), the reported DSC Mean is 0.91, which is >0.80. For "Brainstem" (Medium), the reported DSC Mean is 0.90, which is >0.65. For "OpticChiasm" (Small), the reported DSC Mean is 0.63, which is >0.50.
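For illustration, the following minimal Python sketch shows how a Dice Similarity Coefficient could be computed from binary masks and checked against the size-category thresholds above. It assumes NumPy boolean masks; the helper names, toy volumes, and the "both masks empty" convention are illustrative choices, not taken from the submission.

```python
import numpy as np

# Size-category acceptance thresholds stated in the submission.
DSC_THRESHOLDS = {"Large": 0.80, "Medium": 0.65, "Small": 0.50}

def dice_coefficient(auto_mask: np.ndarray, truth_mask: np.ndarray) -> float:
    """Dice Similarity Coefficient: 2*|A intersect B| / (|A| + |B|) for two binary masks."""
    auto = auto_mask.astype(bool)
    truth = truth_mask.astype(bool)
    denom = auto.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty; convention chosen for this sketch, not from the source
    return 2.0 * np.logical_and(auto, truth).sum() / denom

def passes_acceptance(per_case_dsc: list[float], size_category: str) -> bool:
    """Mean DSC across the test cases must exceed the size-category threshold."""
    return float(np.mean(per_case_dsc)) > DSC_THRESHOLDS[size_category]

# Toy illustration with synthetic cubes rather than real contours:
truth = np.zeros((64, 64, 64), dtype=bool)
truth[20:40, 20:40, 20:40] = True
auto = np.zeros_like(truth)
auto[22:40, 20:40, 20:40] = True
print(round(dice_coefficient(auto, truth), 3))         # about 0.947
print(passes_acceptance([0.94, 0.92, 0.95], "Large"))  # True: mean 0.937 > 0.80
```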
2. Sample size used for the test set and the data provenance
- CT Test Set:
- Sample Size: An average of 140 test image sets per CT structure model, equal to 20% of the number of training image sets. The specific number of test data sets for each CT structure is provided in the table (e.g., A_Aorta: 60, Bladder: 372).
- Data Provenance:
- Country of Origin: Not explicitly stated, but the patient demographics suggest diverse origins, likely within the US, given the prevalence of specific cancers and racial demographics. The acquisition was done using a Philips Big Bore CT simulator.
- Retrospective or Prospective: Not explicitly stated, but common in such validation studies, the data is typically retrospective patient data.
- Demographics: 51.7% male, 48.3% female. Age range: 11-30 (0.3%), 31-50 (6.2%), 51-70 (43.3%), 71-100 (50.3%). Race: 84.0% White, 12.8% Black or African American, 3.2% Other.
- Clinical Relevance: Data spanned across common radiation therapy treatment subgroups (Prostate, Breast, Lung, Head and Neck cancers).
- MR Test Set:
- Sample Size: An average of 16 test image sets per MR structure model. Specific numbers are not provided for each MR structure, but the total validation set for sensitivity and specificity was 16 datasets.
- Data Provenance:
- Country of Origin: Massachusetts General Hospital, Boston, MA.
- Retrospective or Prospective: The text states "These training sets consisted primarily of glioblastoma and astrocytoma cases from the Cancer Imaging Archive (TCIA) Glioma data set." and that "The testing dataset was acquired at a different institution using a different scanner and sequence parameters", implying retrospective data collection from existing archives/institutions.
- Demographics: 56% Male and 44% Female patients, with ages ranging from 20-80. No Race or Ethnicity data was provided.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Three clinically experienced experts.
- Qualifications: Two radiation therapy physicists and one radiation dosimetrist.
4. Adjudication method for the test set
- Method: Ground truthing of each test dataset was generated manually using consensus (NRG/RTOG) guidelines, as appropriate, by the three clinically experienced experts. This implies a form of expert consensus adjudication.
5. If a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance
- MRMC Study: No, an MRMC comparative effectiveness study involving human readers with and without AI assistance was not conducted. The performance data focuses on the software's standalone accuracy (Dice Similarity Coefficient, sensitivity, and specificity). The text states: "As with the Predicate Device, no clinical trials were performed for AutoContour RADAC V2."
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Performance: Yes, the primary performance evaluation provided is for the software's standalone performance, measured by the Dice Similarity Coefficient (DSC), sensitivity, and specificity of the auto-generated contours against expert-established ground truth. The study explicitly states, "Further tests were performed on independent datasets from those included in training and validation sets in order to validate the generalizability of the machine learning model." This is a validation of the algorithm's performance.
7. The type of ground truth used
- Type of Ground Truth: Expert consensus of manually contoured structures, established using NRG/RTOG (Radiation Therapy Oncology Group) guidelines. This is a form of expert consensus.
8. The sample size for the training set
- CT Training Set: An average of 700 training image sets per CT structure model. The specific number of training data sets for each CT structure is provided in the table (e.g., A_Aorta: 240, Bladder: 1000).
- MR Training Set: An average of 81 training image sets for MR structure models.
9. How the ground truth for the training set was established
- The document does not explicitly describe how the training ground truth was established. It does state that "Datasets used for testing were removed from the training dataset pool before model training began, and used exclusively for testing," and the consistency of the ground-truth methodology described for testing (expert manual contouring per consensus guidelines) suggests that expert manual contouring based on established guidelines was used for the training data as well (a minimal holdout sketch follows this item).
- Source for MR Training Data: Primarily glioblastoma and astrocytoma cases from The Cancer Imaging Archive (TCIA) Glioma data set.
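To make the holdout description above concrete, here is a minimal sketch of reserving a test pool before any training, as the summary describes. The case identifiers, the 20% default fraction, and the random seed are illustrative assumptions rather than details of the actual study.

```python
import random

def split_holdout(dataset_ids: list[str], test_fraction: float = 0.2, seed: int = 0):
    """Reserve a test pool up front; held-out cases never enter model training."""
    rng = random.Random(seed)
    ids = list(dataset_ids)
    rng.shuffle(ids)
    n_test = max(1, round(len(ids) * test_fraction))
    return ids[n_test:], ids[:n_test]  # (training pool, test pool)

# Hypothetical usage with made-up case identifiers:
train_ids, test_ids = split_holdout([f"case_{i:04d}" for i in range(850)])
assert not set(train_ids) & set(test_ids)  # the two pools are disjoint
print(len(train_ids), len(test_ids))       # 680 170 for this illustrative fraction
```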
(Letterhead: Department of Health & Human Services and U.S. Food & Drug Administration logos)
Radformation, Inc.
% Kurt Sysock, Co-founder/CEO
335 Madison Avenue, 4th Floor
New York, New York 10017
Re: K220598
Trade/Device Name: AutoContour Model RADAC V2
Regulation Number: 21 CFR 892.2050
Regulation Name: Medical Image Management And Processing System
Regulatory Class: Class II
Product Code: QKB
Dated: July 22, 2022
Received: July 27, 2022
Dear Kurt Sysock:
We have reviewed your Section 510(k) premarket notification of intent to market the device referenced above and have determined the device is substantially equivalent (for the indications for use stated in the enclosure) to legally marketed predicate devices marketed in interstate commerce prior to May 28, 1976, the enactment date of the Medical Device Amendments, or to devices that have been reclassified in accordance with the provisions of the Federal Food, Drug, and Cosmetic Act (Act) that do not require approval of a premarket approval application (PMA). You may, therefore, market the device, subject to the general controls provisions of the Act.

Although this letter refers to your product as a device, please be aware that some cleared products may instead be combination products. The 510(k) Premarket Notification Database located at https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm identifies combination product submissions.

The general controls provisions of the Act include requirements for annual registration, listing of devices, good manufacturing practice, labeling, and prohibitions against misbranding and adulteration. Please note: CDRH does not evaluate information related to contract liability warranties. We remind you, however, that device labeling must be truthful and not misleading.
If your device is classified (see above) into either class II (Special Controls) or class III (PMA), it may be subject to additional controls. Existing major regulations affecting your device can be found in the Code of Federal Regulations, Title 21, Parts 800 to 898. In addition, FDA may publish further announcements concerning your device in the Federal Register.
Please be advised that FDA's issuance of a substantial equivalence determination does not mean that FDA has made a determination that your device complies with other requirements of the Act or any Federal statutes and regulations administered by other Federal agencies. You must comply with all the Act's requirements, including, but not limited to: registration and listing (21 CFR Part 807); labeling (21 CFR Part 801); medical device reporting (reporting of medical device-related adverse events) (21 CFR 803) for devices or postmarketing safety reporting (21 CFR 4, Subpart B) for combination products (see https://www.fda.gov/combination-products/guidance-regulatory-information/postmarketing-safety-reporting-combination-products); good manufacturing practice requirements as set forth in the quality systems (QS) regulation (21 CFR Part 820) for devices or current good manufacturing practices (21 CFR 4, Subpart A) for combination products; and, if applicable, the electronic product radiation control provisions (Sections 531-542 of the Act; 21 CFR 1000-1050).
Also, please note the regulation entitled, "Misbranding by reference to premarket notification" (21 CFR Part 807.97). For questions regarding the reporting of adverse events under the MDR regulation (21 CFR Part 803), please go to https://www.fda.gov/medical-device-safety/medical-device-reporting-mdr-how-report-medical-device-problems.
For comprehensive regulatory information about radiation-emitting products, including information about labeling regulations, please see Device Advice (https://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance) and CDRH Learn (https://www.fda.gov/training-and-continuing-education/cdrh-learn). Additionally, you may contact the Division of Industry and Consumer Education (DICE) to ask a question about a specific regulatory topic. See the DICE website (https://www.fda.gov/medical-device-advice-comprehensive-regulatory-assistance/contact-us-division-industry-and-consumer-education-dice) for more information or contact DICE by email (DICE@fda.hhs.gov) or phone (1-800-638-2041 or 301-796-7100).
Sincerely,
for
Julie Sullivan, Ph.D.
Assistant Director
DHT8C: Division of Radiological Imaging and Radiation Therapy Devices
OHT8: Office of Radiological Health
Office of Product Evaluation and Quality
Center for Devices and Radiological Health
Enclosure
Indications for Use
510(k) Number (if known) K220598
Device Name AutoContour RADAC V2
Indications for Use (Describe)
AutoContour is intended to assist radiation treatment planners in contouring structures within medical images in preparation for radiation therapy treatment planning.
Type of Use (Select one or both, as applicable)
- Prescription Use (Part 21 CFR 801 Subpart D)
- Over-The-Counter Use (21 CFR 801 Subpart C)
CONTINUE ON A SEPARATE PAGE IF NEEDED.
This section applies only to requirements of the Paperwork Reduction Act of 1995.
DO NOT SEND YOUR COMPLETED FORM TO THE PRA STAFF EMAIL ADDRESS BELOW.
The burden time for this collection of information is estimated to average 79 hours per response, including the time to review instructions, search existing data sources, gather and maintain the data needed and complete and review the collection of information. Send comments regarding this burden estimate or any other aspect of this information collection, including suggestions for reducing this burden, to:
Department of Health and Human Services
Food and Drug Administration
Office of Chief Information Officer
Paperwork Reduction Act (PRA) Staff
PRAStaff@fda.hhs.gov
"An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB number."
This 510(k) Summary has been created per the requirements of the Safe Medical Device Act (SMDA) of 1990, and the content is provided in conformance with 21 CFR Part 807.92.
5.1. Submitter's Information
| Table 1 : Submitter's Information | |
|---|---|
| Submitter's Name: | Kurt Sysock |
| Company: | Radformation, Inc. |
| Address: | 335 Madison Avenue, 4th Floor, New York, NY 10017 |
| Contact Person: | Alan Nelson, Chief Science Officer, Radformation |
| Phone: | 518-888-5727 |
| Fax: | ---------- |
| Email: | anelson@radformation.com |
| Date of Summary Preparation | 08/22/2022 |
5.2. Device Information
| Table 2 : Device Information | |
|---|---|
| Trade Name: | AutoContour RADAC V2 |
| Common Name: | Radiological Image Processing Software For Radiation Therapy |
| Classification Name: | Medical image management and processing system |
| Classification: | Class II |
| Regulation Number: | 892.2050 |
| Product Code: | QKB |
| Classification Panel: | Radiology |
5.3. Predicate Device Information
AutoContour RADAC V2 (Subject Device) uses its prior submission, AutoContour RADAC (K200323), as the Predicate Device and MIM Contour ProtégéAI (K213976) as a Reference Device.
5.4. Device Description
As with AutoContour RADAC, the AutoContour RADAC V2 device is software that uses DICOM-compliant image data (CT or MR) as input to (1) automatically contour various structures of interest for radiation therapy treatment planning using machine learning based contouring, (2) allow the user to review and modify the resulting contours, and (3) generate DICOM-compliant structure set data that can be imported into a radiation therapy treatment planning system. The deep-learning based structure models are trained using imaging datasets consisting of anatomical organs of the head and neck, thorax, abdomen, and pelvis for adult male and female patients.
AutoContour RADAC V2 consists of 3 main components:
- A .NET client application designed to run on the Windows Operating System allowing the user to load image and structure sets for upload to the cloud-based server for automatic contouring, perform registration with other image sets, as well as review, edit, and export the structure set.
- A local "agent" service designed to run on the Windows Operating System that is configured by the user to monitor a network storage location for new CT and MR datasets that are to be automatically contoured.
- A cloud-based automatic contouring service that produces initial contours based on image sets sent by the user from the .NET client application.
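The agent component amounts to watching a configured storage location and forwarding newly arrived CT or MR series for automatic contouring. Below is a minimal, hypothetical sketch of that idea using the third-party pydicom library; the directory path, polling helper, and the commented upload call are assumptions for illustration and are not Radformation's implementation.

```python
from pathlib import Path

import pydicom  # third-party DICOM library, used here purely for illustration

WATCH_DIR = Path(r"\\storage\incoming_dicom")  # hypothetical monitored network location
SEEN: set[Path] = set()

def is_contourable(path: Path) -> bool:
    """True when the file is a CT or MR slice (header only; pixel data is not read)."""
    try:
        ds = pydicom.dcmread(path, stop_before_pixels=True)
    except Exception:
        return False
    return getattr(ds, "Modality", None) in {"CT", "MR"}

def poll_once() -> list[Path]:
    """Return DICOM files that arrived since the last poll and are eligible for contouring."""
    new_files = []
    for path in sorted(WATCH_DIR.glob("**/*.dcm")):
        if path not in SEEN and is_contourable(path):
            SEEN.add(path)
            new_files.append(path)
    return new_files

# A real agent would loop, group slices by SeriesInstanceUID, and send each complete
# series to the contouring service, e.g.:
#     for f in poll_once():
#         upload_for_contouring(f)   # hypothetical upload call, not part of this sketch
```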
5.5. Indications for Use
AutoContour is intended to assist radiation treatment planners in contouring and reviewing structures within medical images in preparation for radiation therapy treatment planning.
5.6. Technological Characteristics
The Subject Device, AutoContour RADAC V2, uses AutoContour RADAC (K200323) as the Predicate Device for the substantial equivalence comparison. The functionality and technical components of that prior submission remain unchanged in AutoContour RADAC V2; this submission builds on the functionality and technological components of the 510(k)-cleared AutoContour RADAC.
5.6.1. Updates vs. AutoContour (K200323)
The updated submission expands the machine-learning based contouring and manual ROI manipulation to Magnetic Resonance (MR) image types and expands the number of supported CT structure models. Additionally, image registration is expanded to include deformable image registration for the purpose of transferring structure contours from one image set to another.
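As general background on the technique, the following sketch shows one conventional way deformable registration can be used to transfer a structure mask between image sets, using SimpleITK's demons filter. The filter choice and parameter values are assumptions for illustration and do not describe the device's actual registration algorithm.

```python
import SimpleITK as sitk

def transfer_mask_deformably(fixed_img: sitk.Image,
                             moving_img: sitk.Image,
                             moving_mask: sitk.Image) -> sitk.Image:
    """Deformably register moving_img onto fixed_img, then warp a structure mask with the result."""
    fixed = sitk.Cast(fixed_img, sitk.sitkFloat32)
    moving = sitk.Cast(moving_img, sitk.sitkFloat32)

    # Histogram matching helps the intensity-based demons metric when scans differ.
    moving = sitk.HistogramMatching(moving, fixed)

    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(50)    # illustrative value
    demons.SetStandardDeviations(1.5)   # smoothing of the displacement field
    displacement = demons.Execute(fixed, moving)

    # DisplacementFieldTransform expects a 64-bit vector displacement field.
    transform = sitk.DisplacementFieldTransform(
        sitk.Cast(displacement, sitk.sitkVectorFloat64))

    # Nearest-neighbour resampling keeps the warped structure mask binary.
    return sitk.Resample(moving_mask, fixed_img, transform,
                         sitk.sitkNearestNeighbor, 0, moving_mask.GetPixelID())
```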
| Table 3: Technological Characteristics, AutoContour RADAC V2 vs. AutoContour RADAC (K200323) | | |
|---|---|---|
| Characteristic | Subject Device: AutoContour RADAC V2 | Predicate Device: AutoContour RADAC (K200323) |
| Design: Image registration | Manual and Automatic Rigid registration. Automatic Deformable Registration. | Manual Rigid registration. |
| Design: Supported modalities | CT or MR input for contouring or registration/fusion. PET/CT input for registration/fusion only. DICOM RTSTRUCT for output. | CT input for contouring or manual registration/fusion. MR, PET/CT input for manual registration/fusion only. DICOM RTSTRUCT for output. |
| Regions and Volumes of interest (ROI) | CT or MR input for contouring of anatomical regions: Head and Neck, Thorax, Abdomen and Pelvis. CT Models: A_Aorta, A_Aorta_Asc, A_Aorta_Dsc, A_LAD, Bladder, Bone_Ilium_L, Bone_Ilium_R, Bone_Mandible, Bowel_Bag, BrachialPlex_L, BrachialPlex_R, Brain, Brainstem, Breast_L, Breast_R, Bronchus, Carina, CaudaEquina, Cavity_Oral, Cochlea_L, Cochlea_R, Ear_Internal_L, Ear_Internal_R, Esophagus, External, Eye_L, Eye_R, Femur_L, Femur_R, Femur_RTOG_L, Femur_RTOG_R, Glnd_Lacrimal_L, Glnd_Lacrimal_R, Glnd_Submand_L, Glnd_Submand_R, Glnd_Thyroid, HDR_Cylinder, Heart, Humerus_L, Humerus_R, Kidney_L, Kidney_R, Kidney_Outer_L, Kidney_Outer_R, Larynx, Lens_L, Lens_R, Lips, LN_Ax_L, LN_Ax_R, LN_IMN_L, LN_IMN_R, LN_Neck_IA, LN_Neck_IB-V_L, LN_Neck_IB-V_R, LN_Neck_II_L, LN_Neck_II_R, LN_Neck_II-IV_L, LN_Neck_II-IV_R, LN_Neck_III_L, LN_Neck_III_R, LN_Neck_IV_L, LN_Neck_IV_R, LN_Neck_VIA, LN_Neck_VIIA_L, LN_Neck_VIIA_R, LN_Neck_VIIB_L, LN_Neck_VIIB_R, LN_Pelvics, LN_Sclav_L, LN_Sclav_R, Liver, Lung_L, Lung_R, Marrow_Ilium_L, Marrow_Ilium_R, Musc_Constrict, OpticChiasm, OpticNrv_L, OpticNrv_R, Parotid_L, Parotid_R, PenileBulb, Pituitary, Prostate, Rectum, Rib, SeminalVes, SpinalCanal, SpinalCord, Stomach, Trachea, V_Venacava_S. MR Models: OpticChiasm, OpticNrv_L, OpticNrv_R, Brainstem, Hippocampus_L, Hippocampus_R. | CT input for contouring of anatomical regions: Head and Neck, Thorax, Abdomen and Pelvis. CT Models: Bladder, Bone_Mandible, Brain, Brainstem, Bronchus, Bronchus_Main, Carina, Cavity_Oral, Cochlea_L, Cochlea_R, Esophagus, Eye_L, Eye_R, Femur_L, Femur_R, Glnd_Lacrimal_L, Glnd_Lacrimal_R, Glnd_Submand_L, Glnd_Submand_R, Heart, Kidney_L, Kidney_R, Lens_L, Lens_R, Liver, Lung_L, Lung_R, OpticNrv_L, OpticNrv_R, Parotid_L, Parotid_R, Prostate, Rectum, SpinalCord, Stomach. |
| Computer platform & operating system | Windows based .NET front-end application that also serves as agent Uploader supporting Microsoft Windows 10 (64-bit) and Microsoft Windows Server 2016. Cloud-based Server based automatic contouring application compatible with Linux. Windows python-based automatic contouring application supporting Microsoft Windows 10 (64-bit) and Microsoft Windows Server 2016. | Agent Uploader supporting Microsoft Windows 10 (64-bit) and Microsoft Windows Server 2016. Cloud-based Server based automatic contouring application compatible with Linux. Web application Server based application compatible with Linux with frontend compatible with all modern web browsers. |
As shown in Table 3, almost all technological characteristics are either substantially equivalent to or a subset of the Predicate Device's technological characteristics.
5.7. Discussion of differences
Subset of the Predicate Device
The comparison table above shows that several features of AutoContour RADAC V2 are minor expansions of features that were previously submitted, and therefore these differences do not create new questions regarding the safety and effectiveness of the device relative to the previous submission.
Minor differences
The following minor differences exist, but do not represent any significant additional risks or decreased effectiveness for the device for its intended use:
- Design: Supported modalities: AutoContour RADAC V2 now supports MR image types along with CT for automatic contouring. Similar contour validation was performed on the MR models as was performed previously with the CT-based contour models, and therefore this difference does not represent any significant additional risks or decreased effectiveness for the device for its intended use. The following MR models are supported in AutoContour RADAC V2:
  - OpticChiasm
  - OpticNrv_L
  - OpticNrv_R
  - Brainstem
  - Hippocampus_L
  - Hippocampus_R
- Design: Image Registration: AutoContour RADAC V2 supports automatic deformable image registration along with rigid registration. Similar registration validation was performed on automatic deformable image registration as was performed previously, and therefore it does not represent any significant additional risks or decreased effectiveness for the device for its intended use.
- Compatibility with the environment and other devices / Computer platform & operating system: Local Automatic Contouring Processor: AutoContour RADAC V2 allows automatic contours to be generated locally. The automatic contouring algorithms and models are identical to those available on the cloud-based servers and therefore do not represent any significant additional risks or decreased effectiveness for the device for its intended use.
- Compatibility with the environment and other devices / Computer platform & operating system: Local Automatic Contouring Processor: The AutoContour RADAC V2 front-end interface and agent uploader is built using a .NET Framework application compatible with Windows devices. The updated front-end platform and uploader agent do not represent any significant additional risks or decreased effectiveness for the device for its intended use.
- New CT Models: Compared with the predicate device, AutoContour RADAC V2 supports contouring of 58 new models on CT images (the new models are listed below).
The addition of these models does not represent a significant deviation from the intended use and operation of AutoContour, nor does it represent a new significant unmitigated risk, because:
(a) the same CNN architecture was used to train these new CT models;
(b) all new models passed the same DSC test protocol criteria for similar structure sizes, and all models were also tested with sensitivity and specificity analysis, inter-observer variability testing, and expert user evaluation like the predicate device, with the results of these tests supporting substantial equivalence in the effectiveness of the new models compared with the predicate device models; and
(c) the same risk mitigations that have been applied to the predicate device models have also been applied to all new models, including appropriate labeling mitigations and a process for required review and approval of structures in the application prior to being able to export them.
  - A_Aorta
  - A_Aorta_Asc
  - A_Aorta_Dsc
  - A_LAD
  - Bone_Ilium_L
  - Bone_Ilium_R
  - Bowel_Bag
  - BrachialPlex_L
  - BrachialPlex_R
  - Breast_L
  - Breast_R
  - CaudaEquina
  - Ear_Internal_L
  - Ear_Internal_R
  - External
  - Femur_RTOG_L
  - Femur_RTOG_R
  - Glnd_Thyroid
  - HDR_Cylinder
  - Humerus_L
  - Humerus_R
  - Kidney_Outer_L
  - Kidney_Outer_R
  - Larynx
  - Lips
  - LN_Ax_L
  - LN_Ax_R
  - LN_IMN_L
  - LN_IMN_R
  - LN_Neck_IA
  - LN_Neck_IB-V_L
  - LN_Neck_IB-V_R
  - LN_Neck_II_L
  - LN_Neck_II_R
  - LN_Neck_II-IV_L
  - LN_Neck_II-IV_R
  - LN_Neck_III_L
  - LN_Neck_III_R
  - LN_Neck_IV_L
  - LN_Neck_IV_R
  - LN_Neck_VIA
  - LN_Neck_VIIA_L
  - LN_Neck_VIIA_R
  - LN_Neck_VIIB_L
  - LN_Neck_VIIB_R
  - LN_Pelvics
  - LN_Sclav_L
  - LN_Sclav_R
  - Marrow_Ilium_L
  - Marrow_Ilium_R
  - Musc_Constrict
  - PenileBulb
  - Pituitary
  - Rib
  - SeminalVes
  - SpinalCanal
  - Trachea
  - V_Venacava_S
5.8. Performance Data
The following performance data were provided in support of the substantial equivalence determination.
Sterilization & Shelf-life Testing
AutoContour is a pure software device that does not come in contact with the patient; it is not supplied sterile and does not have a shelf life.
Biocompatibility
AutoContour is a pure software device and does not come in contact with the patient.
Electrical safety and electromagnetic compatibility (EMC)
AutoContour is a pure software device; hence, no electromagnetic compatibility or electrical safety testing was conducted for the Subject Device.
Software Verification and Validation Testing
As with the Predicate Device, no clinical trials were performed for AutoContour RADAC V2. Non-clinical tests were performed to demonstrate that AutoContour RADAC V2 performs as intended per its indications for use. Further tests were performed on independent datasets from those included in training and validation sets in order to validate the generalizability of the machine learning model.
Mean Dice Similarity Coefficient (DSC) was used to validate the accuracy of structure model outputs within three size categories. Because DSC is sensitive to structure volume, the validation passing criteria were set at a mean DSC exceeding 0.80 for Large volume structures (e.g., Liver, Lung), 0.65 for Medium volume structures (e.g., Parotid, Eye), and 0.50 for Small structures (e.g., OpticChiasm, Lens). For CT Large, Medium, and Small structures, AutoContour's results had mean DSCs of 0.94 +/- 0.03, 0.82 +/- 0.09, and 0.61 +/- 0.14, respectively:
| Structure | # Training Data Sets | # Test Data Sets | Size | DSC Mean | DSC STD | Lower Bound 95% Confidence Interval |
|---|---|---|---|---|---|---|
| A_Aorta | 240 | 60 | Large | 0.91 | 0.03 | 0.86 |
| A_Aorta_Asc | 240 | 60 | Large | 0.90 | 0.03 | 0.85 |
| A_Aorta_Dsc | 240 | 60 | Large | 0.93 | 0.02 | 0.90 |
| A_LAD | 461 | 116 | Small | 0.57 | 0.13 | 0.36 |
| Bladder | 1000 | 372 | Large | 0.92 | 0.09 | 0.77 |
| Bone_Ilium_L | 120 | 31 | Large | 0.94 | 0.01 | 0.92 |
| Bone_Ilium_R | 120 | 31 | Large | 0.94 | 0.01 | 0.92 |
| Bone_Mandible | 230 | 58 | Medium | 0.90 | 0.03 | 0.85 |
| Bowel_Bag | 131 | 33 | Large | 0.93 | 0.04 | 0.86 |
| BrachialPlex_L | 78 | 20 | Medium | 0.73 | 0.08 | 0.60 |
| BrachialPlex_R | 78 | 20 | Medium | 0.73 | 0.08 | 0.60 |
| Brain | 1000 | 28 | Large | 0.96 | 0.01 | 0.94 |
| Brainstem | 236 | 60 | Medium | 0.90 | 0.02 | 0.87 |
| Breast_L | 462 | 116 | Large | 0.93 | 0.04 | 0.86 |
| Breast_R | 462 | 116 | Large | 0.93 | 0.04 | 0.86 |
| Bronchus | 200 | 50 | Medium | 0.73 | 0.09 | 0.58 |
| Carina | 2312 | 578 | Medium | 0.82 | 0.08 | 0.69 |
| CaudaEquina | 87 | 22 | Medium | 0.90 | 0.02 | 0.87 |
| Cavity_Oral | 532 | 133 | Medium | 0.83 | 0.1 | 0.67 |
| Cochlea_L | 106 | 26 | Small | 0.65 | 0.1 | 0.49 |
| Cochlea_R | 106 | 26 | Small | 0.65 | 0.1 | 0.49 |
| Ear_Internal_L | 1289 | 324 | Small | 0.64 | 0.21 | 0.29 |
| Ear_Internal_R | 1289 | 324 | Small | 0.64 | 0.21 | 0.29 |
| Esophagus | 1116 | 279 | Medium | 0.76 | 0.13 | 0.55 |
| External | 3173 | 826 | Large | 0.99 | 0.04 | 0.92 |
| Eye_L | 336 | 85 | Medium | 0.92 | 0.02 | 0.89 |
| Eye_R | 336 | 85 | Medium | 0.92 | 0.02 | 0.89 |
| Femur_L | 1315 | 330 | Large | 0.96 | 0.06 | 0.86 |
| Femur_R | 1315 | 330 | Large | 0.96 | 0.06 | 0.86 |
| Femur_RTOG_L | 1315 | 330 | Large | 0.96 | 0.06 | 0.86 |
| Femur_RTOG_R | 1315 | 330 | Large | 0.96 | 0.06 | 0.86 |
| Glnd_Lacrimal_L | 353 | 86 | Small | 0.54 | 0.18 | 0.24 |
| Glnd_Lacrimal_R | 353 | 86 | Small | 0.54 | 0.18 | 0.24 |
| Glnd_Submand_L | 814 | 43 | Medium | 0.82 | 0.15 | 0.57 |
| Glnd_Submand_R | 814 | 43 | Medium | 0.82 | 0.15 | 0.57 |
| Glnd_Thyroid | 169 | 43 | Medium | 0.79 | 0.06 | 0.69 |
| HDR_Cylinder | 15 | 4 | Large | 0.97 | 0.01 | 0.97 |
| Heart | 2060 | 515 | Large | 0.93 | 0.05 | 0.85 |
| Humerus_L | 451 | 114 | Large | 0.95 | 0.02 | 0.92 |
| Humerus_R | 451 | 114 | Large | 0.95 | 0.02 | 0.92 |
| Kidney_L | 1083 | 271 | Medium | 0.94 | 0.03 | 0.89 |
| Kidney_R | 1083 | 271 | Medium | 0.94 | 0.03 | 0.89 |
| Kidney_Outer_L | 590 | 148 | Medium | 0.93 | 0.05 | 0.85 |
| Kidney_Outer_R | 590 | 148 | Medium | 0.93 | 0.05 | 0.85 |
| Larynx | 172 | 43 | Medium | 0.85 | 0.05 | 0.77 |
| Lens_L | 1114 | 278 | Small | 0.66 | 0.14 | 0.43 |
| Lens_R | 1114 | 278 | Small | 0.66 | 0.14 | 0.43 |
| Lips | 432 | 110 | Small | 0.52 | 0.16 | 0.26 |
| LN_Ax_L | 437 | 110 | Medium | 0.83 | 0.07 | 0.71 |
| LN_Ax_R | 437 | 110 | Medium | 0.83 | 0.07 | 0.71 |
| LN_IMN_L | 390 | 97 | Medium | 0.68 | 0.07 | 0.56 |
| LN_IMN_R | 390 | 97 | Medium | 0.68 | 0.07 | 0.56 |
| LN_Neck_IA | 272 | 68 | Medium | 0.78 | 0.06 | 0.68 |
| LN_Neck_IB-V_L | 316 | 79 | Medium | 0.86 | 0.05 | 0.78 |
| LN_Neck_IB-V_R | 316 | 79 | Medium | 0.86 | 0.05 | 0.78 |
| LN_Neck_II_L | 271 | 68 | Medium | 0.84 | 0.04 | 0.77 |
| LN_Neck_II_R | 271 | 68 | Medium | 0.84 | 0.04 | 0.77 |
| LN_Neck_II-IV_L | 325 | 82 | Medium | 0.86 | 0.03 | 0.81 |
| LN_Neck_II-IV_R | 325 | 82 | Medium | 0.86 | 0.03 | 0.81 |
| LN_Neck_III_L | 328 | 83 | Medium | 0.80 | 0.09 | 0.65 |
| LN_Neck_III_R | 328 | 83 | Medium | 0.80 | 0.09 | 0.65 |
| LN_Neck_IV_L | 328 | 82 | Medium | 0.77 | 0.07 | 0.65 |
| LN_Neck_IV_R | 328 | 82 | Medium | 0.77 | 0.07 | 0.65 |
| LN_Neck_VIA | 262 | 66 | Medium | 0.79 | 0.07 | 0.67 |
| LN_Neck_VIIA_L | 272 | 69 | Medium | 0.71 | 0.07 | 0.59 |
| LN_Neck_VIIA_R | 272 | 69 | Medium | 0.71 | 0.07 | 0.59 |
| LN_Neck_VIIB_L | 332 | 84 | Medium | 0.79 | 0.06 | 0.69 |
| LN_Neck_VIIB_R | 332 | 84 | Medium | 0.79 | 0.06 | 0.69 |
| LN_Pelvics | 502 | 126 | Medium | 0.87 | 0.05 | 0.79 |
| LN_Sclav_L | 460 | 115 | Medium | 0.88 | 0.05 | 0.8 |
| LN_Sclav_R | 460 | 115 | Medium | 0.88 | 0.05 | 0.8 |
| Liver | 480 | 120 | Large | 0.96 | 0.02 | 0.93 |
| Lung_L | 3491 | 748 | Large | 0.97 | 0.02 | 0.94 |
| Lung_R | 3491 | 748 | Large | 0.97 | 0.02 | 0.94 |
| Marrow_Ilium_L | 121 | 31 | Large | 0.91 | 0.02 | 0.88 |
| Marrow_Ilium_R | 121 | 31 | Large | 0.91 | 0.02 | 0.88 |
| Musc_Constrict | 272 | 69 | Medium | 0.75 | 0.06 | 0.65 |
| OpticChiasm | 158 | 40 | Small | 0.63 | 0.07 | 0.51 |
| OpticNrv_L | 741 | 185 | Small | 0.51 | 0.18 | 0.21 |
| OpticNrv_R | 741 | 185 | Small | 0.51 | 0.18 | 0.21 |
| Parotid_L | 739 | 48 | Medium | 0.82 | 0.04 | 0.75 |
| Parotid_R | 739 | 48 | Medium | 0.82 | 0.04 | 0.75 |
| PenileBulb | 232 | 58 | Small | 0.76 | 0.09 | 0.61 |
| Pituitary | 201 | 41 | Small | 0.68 | 0.09 | 0.53 |
| Prostate | 708 | 177 | Medium | 0.86 | 0.04 | 0.79 |
| Rectum | 1436 | 359 | Medium | 0.88 | 0.05 | 0.8 |
| Rib | 64 | 17 | Large | 0.87 | 0.02 | 0.84 |
| SeminalVes | 236 | 60 | Medium | 0.79 | 0.07 | 0.67 |
| SpinalCanal | 87 | 22 | Large | 0.90 | 0.02 | 0.87 |
| SpinalCord | 1000 | 24 | Medium | 0.68 | 0.09 | 0.53 |
| Stomach | 431 | 83 | Large | 0.88 | 0.07 | 0.76 |
| Trachea | 196 | 49 | Medium | 0.87 | 0.05 | 0.79 |
| V_Venacava_S | 162 | 41 | Medium | 0.81 | 0.06 | 0.71 |
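For context on the table's statistics columns: the tabulated lower bounds are consistent with a one-sided normal approximation (mean minus 1.645 times the standard deviation), although the submission does not state the exact method used. The sketch below shows that computation under this assumption, with illustrative per-case values rather than data from the study.

```python
import statistics

Z_95_ONE_SIDED = 1.645  # one-sided 95% quantile of the standard normal distribution

def dsc_summary(per_case_dsc: list[float]) -> tuple[float, float, float]:
    """Return (mean, std, lower 95% bound) for a list of per-case DSC values.

    The lower bound here is the one-sided normal approximation mean - 1.645*std,
    which is consistent with the tabulated values but is an assumption about the
    method actually used in the submission.
    """
    mean = statistics.mean(per_case_dsc)
    std = statistics.stdev(per_case_dsc)
    return mean, std, mean - Z_95_ONE_SIDED * std

# Hypothetical usage with illustrative numbers:
mean, std, lower = dsc_summary([0.93, 0.90, 0.88, 0.94, 0.91])
print(f"mean={mean:.2f} std={std:.2f} lower95={lower:.2f}")
```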
These DSC results were compared with state-of-the-art contouring results published in the literature as well as with the Reference Device, MIM Contour ProtégéAI (K213976), and were found to be consistent with the state of the art for those structures for which such data were available.
The test datasets were independent from those used for training and consisted of 20% of the number of training image sets used as input for the model. For CT structure models there were an average of 700 training and 140 testing image sets. Datasets used for testing were removed from the training dataset pool before model training began and were used exclusively for testing. Among the patients used for CT testing, 51.7% were male and 48.3% female. Patient age distribution: 11-30: 0.3%, 31-50: 6.2%, 51-70: 43.3%, 71-100: 50.3%. Race: 84.0% White, 12.8% Black or African American, 3.2% Other. CT testing data spanned the treatment subgroups most typically found in a radiation therapy treatment clinic, with the most common diagnoses being cancers of the Prostate (21%), Breast (21%), Lung (29%), Head and Neck (16%), and Other (13%). CT datasets used for testing were acquired using a Philips Big Bore CT simulator, with the majority of scans having an average slice thickness of 2 mm, in-plane resolution between 1-1.2 mm, and acquisition parameters of 120 kVp and 674 +/- 329 average mAs. Ground truthing of each test dataset was generated manually using consensus (NRG/RTOG) guidelines, as appropriate, by three clinically experienced experts consisting of 2 radiation therapy physicists and 1 radiation dosimetrist.
Sensitivity and specificity for each CT structure model were evaluated on 50 unique patients independent of the training data. 75 structure models had a sensitivity of 100%, 14 models had a sensitivity > 95%, 3 models had a sensitivity > 90%, and 1 model had a sensitivity > 85%.

For CT specificity, 29 structure models had a specificity of 100%, 14 models had a specificity > 90%, 12 models had a specificity > 80%, and the remaining 38 models ranged from 0 to 80%. Note that false positives generally occur when the structure is not actually in the image; in such cases the issue is mitigated by AutoContour's user-specified structure template system, which filters out structure results that are not selected by the user.
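In this context, sensitivity and specificity describe whether a structure model correctly reports a structure as present or absent on a given scan. The sketch below shows the standard calculation from per-scan outcomes, plus a simple filter of the kind the template mitigation describes; the function names and counts are illustrative assumptions, not the product's code.

```python
def sensitivity_specificity(results: list[tuple[bool, bool]]) -> tuple[float, float]:
    """Each item is (structure_truly_present, structure_detected) for one test scan."""
    tp = sum(1 for present, detected in results if present and detected)
    fn = sum(1 for present, detected in results if present and not detected)
    tn = sum(1 for present, detected in results if not present and not detected)
    fp = sum(1 for present, detected in results if not present and detected)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

def filter_to_template(generated: dict[str, object], selected_structures: set[str]) -> dict[str, object]:
    """Mimic the described mitigation: drop contours the user did not request in the template."""
    return {name: roi for name, roi in generated.items() if name in selected_structures}

# Hypothetical usage with made-up per-scan outcomes:
cases = [(True, True)] * 48 + [(True, False)] * 2 + [(False, False)] * 40 + [(False, True)] * 10
print(sensitivity_specificity(cases))  # (0.96, 0.8) for these made-up counts
```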
The MR structure models had an average of 81 training image sets and 16 testing image sets. The training sets consisted primarily of glioblastoma and astrocytoma cases from The Cancer Imaging Archive (TCIA) Glioma data set (Shusharina, N., & Bortfeld, T. (2021). Glioma Image Segmentation for Radiotherapy: RT targets, barriers to cancer spread, and organs at risk [Data set]. The Cancer Imaging Archive. https://doi.org/10.7937/TCIA.T905-ZQ20). Datasets used for testing were removed from the training dataset pool before model training began and were used exclusively for testing. For MR structure model validation, the testing dataset was acquired at a different institution using a different scanner and sequence parameters (GE Signa HDX, BRAVO sequence) compared to those used for the training dataset (Siemens Skyra, MPRAGE sequence). Training and testing data were composed of 56% male and 44% female patients with ages ranging from 20-80. No race or ethnicity data were provided as a part of this study. The source location is Massachusetts General Hospital, Boston, MA.
MR datasets used for testing had an average slice thickness of 1 mm, in-plane resolution between 0.5-1.0 mm, and acquisition parameters of TR = 8.9 ms and TE = 3.2 ms. Ground truthing of each test dataset was generated manually using consensus (NRG/RTOG) guidelines, as appropriate, by three clinically experienced experts consisting of 2 radiation therapy physicists and 1 radiation dosimetrist. For MR structure models, a mean DSC of 0.67 +/- 0.08 was found across all structure models.
| Structure | Size | Pass Criteria (DSC Mean) | DSC Mean | DSC STD | Lower Bound 95% Confidence Interval |
|---|---|---|---|---|---|
| Brainstem | Medium | 0.65 | 0.90 | 0.02 | 0.87 |
| OpticChiasm | Small | 0.50 | 0.53 | 0.11 | 0.35 |
| OpticNrv_L | Small | 0.50 | 0.64 | 0.08 | 0.51 |
| OpticNrv_R | Small | 0.50 | 0.64 | 0.08 | 0.51 |
| Hippocampus_R | Medium | 0.65 | 0.65 | 0.09 | 0.50 |
| Hippocampus_L | Medium | 0.65 | 0.65 | 0.09 | 0.50 |
Sensitivity and specificity were evaluated on the 16 validation MR datasets that were not included in the training. All structure models had a sensitivity of 100% and a specificity of >85% in these test cases.
Limitations of AutoContour related to structure contouring performance and limitations of the training data are disclosed in the product labeling. AutoContour also mitigates risk of incorrect contours being propagated into a treatment plan through a review process that requires users to review every slice and approve the structure prior to being able to export it to a treatment planning system.
Validation testing of the AutoContour RADAC V2 device supports substantial equivalence to the predicate device AutoContour RADAC (K200323) and shows that the device performs as well as the reference device MIM Contour ProtégéAI (K213976) in automatic segmentation.
Mechanical and Acoustic Testing
Not Applicable (Standalone Software)
Animal Study
No animal studies were conducted using the Subject Device, AutoContour.
Clinical Studies
No clinical studies were conducted using the Subject Device, AutoContour.
5.9. Conclusion
Based on this Discussion and the Testing and Performance Data, the subject device is determined to be as safe and effective as its predicate device, AutoContour RADAC (K200323), and performs as well in automatic segmentation as the reference device, MIM Contour ProtégéAI (K213976).
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).