
510(k) Data Aggregation

    K Number: K242745
    Date Cleared: 2025-03-27 (197 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Predicate For: N/A
    Why did this record match? Reference Devices: K231765, K223774, K211881

    Intended Use

    AI-Rad Companion Organs RT is post-processing software intended to automatically contour pre-defined structures in DICOM CT and MR images using deep-learning-based algorithms.

    Contours that are generated by AI-Rad Companion Organs RT may be used as input for clinical workflows including external beam radiation therapy treatment planning. AI-Rad Companion Organs RT must be used in conjunction with appropriate software such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept contours generated by AI-Rad Companion Organs RT.

    The outputs of AI-Rad Companion Organs RT are intended to be used by trained medical professionals.

    The software is not intended to automatically detect or contour lesions.

    Device Description

    AI-Rad Companion Organs RT provides automatic segmentation of pre-defined structures such as Organs-at-risk (OAR) from CT or MR medical series, prior to dosimetry planning in radiation therapy. AI-Rad Companion Organs RT is not intended to be used as a standalone diagnostic device and is not a clinical decision-making software.

    CT or MR image series serve as input for AI-Rad Companion Organs RT and are acquired as part of a typical scanner acquisition. Once processed by the AI algorithms, the generated contours in DICOM RTSTRUCT format are reviewed in a confirmation window, allowing the clinical user to confirm or reject the contours before sending them to the target system. Optionally, the user may choose to transfer the contours directly to a configurable DICOM node (e.g., the Treatment Planning System (TPS), which is the standard location for radiation therapy planning).

    AI-Rad Companion Organs RT must be used in conjunction with appropriate software such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept the automatically generated contours. Then the output of AI-Rad Companion Organs RT must be reviewed and, where necessary, edited with appropriate software before accepting generated contours as input to treatment planning steps. The output of AI-Rad Companion Organs RT is intended to be used by qualified medical professionals, who can perform a complementary manual editing of the contours or add any new contours in the TPS (or any other interactive contouring application supporting DICOM-RT objects) as part of the routine clinical workflow.
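    For orientation only, the sketch below (not part of the cleared device) shows how the structures in a DICOM RT Structure Set such as the RTSTRUCT output described above could be listed with the open-source pydicom library; the file name is a hypothetical placeholder.

```python
# Illustrative only (not part of the cleared device): listing the structures in
# a DICOM RT Structure Set with the open-source pydicom library.
import pydicom

ds = pydicom.dcmread("organs_rt_output.dcm")  # hypothetical RTSTRUCT file

# StructureSetROISequence: one item per contoured structure (e.g., each OAR).
for roi in ds.StructureSetROISequence:
    print(roi.ROINumber, roi.ROIName)

# ROIContourSequence holds the contour point data that a TPS or interactive
# contouring application would load for review and editing.
print(len(ds.ROIContourSequence), "contour sets")
```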

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance Study for AI-Rad Companion Organs RT

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the AI-Rad Companion Organs RT device, particularly for the enhanced CT contouring algorithm, are based on comparing its performance to the predicate device and relevant literature/cleared devices. The primary metrics used are the Dice coefficient and the Average Symmetric Surface Distance (ASSD).
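    For reference, the sketch below is a minimal, generic formulation of these two metrics in Python (numpy/scipy), assuming binary 3D masks on a common voxel grid; it is not the vendor's evaluation code, and Dice values would be scaled by 100 to match the percentages reported in the tables.

```python
# Generic sketch of the two metrics (not the vendor's evaluation code),
# assuming binary 3D masks on a common voxel grid.
import numpy as np
from scipy import ndimage

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient: 2*|A ∩ B| / (|A| + |B|), in [0, 1]."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return float(2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum()))

def assd(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Average symmetric surface distance in mm, given the voxel spacing."""
    def surface(mask):
        # Surface voxels = mask voxels removed by a one-voxel erosion.
        return mask & ~ndimage.binary_erosion(mask)

    pred, gt = pred.astype(bool), gt.astype(bool)
    s_pred, s_gt = surface(pred), surface(gt)
    # Euclidean distance from every voxel to the nearest surface voxel of the
    # other mask, averaged over both surfaces (hence "symmetric").
    d_to_gt = ndimage.distance_transform_edt(~s_gt, sampling=spacing)
    d_to_pred = ndimage.distance_transform_edt(~s_pred, sampling=spacing)
    return float(np.concatenate([d_to_gt[s_pred], d_to_pred[s_gt]]).mean())
```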

    Table 3: Acceptance Criteria of AIRC Organs RT VA50

    | Validation Testing Subject | Acceptance Criteria | Reported Device Performance (Summary) |
    |---|---|---|
    | Organs in Predicate Device | All organs segmented in the predicate device are also segmented in the subject device. | Confirmed. The device continued to segment all organs previously handled by the predicate. |
    | Organs in Predicate Device | The average (AVG) Dice score difference between the subject and predicate device is < 3%. | Confirmed. "For existing organs, the average (AVG) Dice score difference between the subject device and predicate device is smaller than 3%." |
    | New Organs for Subject Device | The subject device in the selected reference metric has a higher value than the defined baseline value. | Confirmed. "The performance results of the subject device for the new CT organs are comparable to the reference literature & cleared devices. Here equivalence for the new organs is defined such that the selected reference metric has a higher value than the defined baseline." |

    Table 3: Performance Summary of the Subject Device CT Contouring (Overall Average Dice Coefficients)

    | Anatomic Region | Avg Dice (%) | Std Dice (%) | 95% CI |
    |---|---|---|---|
    | Head & Neck | 76.1 | 14.3 | [75.1, 77.2] |
    | Head & Neck lymph nodes | 69.3 | 13.9 | [68.7, 70.0] |
    | Thorax | 76.9 | 15.8 | [76.2, 77.6] |
    | Abdomen | 87.3 | 10.1 | [86.3, 88.2] |
    | Pelvis | 85.7 | 9.6 | [85.0, 86.5] |
    | Cardiac | 75.6 | 15.1 | [74.1, 77.1] |

    Table 4: Detailed Performance Evaluation of the New Organs in the Subject Device (Selected Examples)

    | Organ Name | No. | AVG Dice (%) | STD Dice (%) | MED Dice (%) | 95% CI Dice | AVG ASSD (mm) | STD ASSD (mm) | MED ASSD (mm) | 95% CI ASSD |
    |---|---|---|---|---|---|---|---|---|---|
    | Left Breast | 30 | 90.4 | 3.8 | 91 | [89, 91.8] | 2.4 | 2.2 | 1.8 | [1.5, 3.2] |
    | Right Breast | 30 | 90.2 | 3.7 | 90.8 | [88.8, 91.5] | 1.9 | 0.7 | 1.8 | [1.7, 2.2] |
    | Bowel Bag | 33 | 95 | 3.6 | 96.5 | [93.7, 96.3] | 1.9 | 1.5 | 1.4 | [1.4, 2.5] |
    | Pituitary | 30 | 75.8 | 7.4 | 77 | [73.1, 78.6] | 0.7 | 0.3 | 0.6 | [0.5, 0.8] |
    | Brainstem | 30 | 88.4 | 2.5 | 88.8 | [87.5, 89.3] | 1.0 | 0.3 | 0.9 | [0.9, 1.1] |
    | Esophagus | 30 | 85.6 | 4.2 | 86 | [84, 87.2] | 0.6 | 0.3 | 0.6 | [0.5, 0.7] |
    | Mediastinal LN 9L | 31 | 38.3 | 21.1 | 42.9 | [30.6, 46.1] | 5.3 | 4.4 | 3.7 | [3.7, 6.9] |

    (Note: The full Table 4 from the document provides detailed performance for all 37 new organs. This table includes a selection for illustrative purposes.)
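    The submission does not state how the reported 95% confidence intervals were computed; assuming a standard normal approximation for the mean, they can be reproduced from the tabulated mean, standard deviation, and case count, as in this sketch:

```python
# Hedged sketch: the interval method is not stated in the document; a normal
# approximation for the mean is assumed here.
import math

def ci95_of_mean(mean: float, std: float, n: int) -> tuple[float, float]:
    half_width = 1.96 * std / math.sqrt(n)
    return mean - half_width, mean + half_width

# Left Breast row of Table 4: n = 30, Dice 90.4 ± 3.8 %
print(ci95_of_mean(90.4, 3.8, 30))  # ≈ (89.0, 91.8), matching the reported [89, 91.8]
```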

    2. Sample Sizes and Data Provenance

    • Test Set Sample Size:
      • CT Contouring Algorithm: N = 579 cases
      • MR Contouring Algorithm: The MR algorithm is unchanged from the predicate, so its performance claims carry over; the predicate was validated using 66 cases.
    • Data Provenance (CT Contouring Algorithm Test Set):
      • Geographic Origin (Overall N=579): Data from multiple clinical sites across North America, South America, Asia, Australia, and Europe.
      • Example Cohorts (Table 5: Validation Testing Data Information based on Cohort):
        • Cohort A.1 (N=73): Germany (14), Brazil (59)
        • Cohort A.2 (N=40): Canada (40)
        • Cohort A.3 (N=301): South/North America (184), EU (44), Asia (33), Australia (28), Unknown (12)
        • Cohort B (N=165): South/North America (100), EU (51), Asia (6), Australia (3), Unknown (5)
      • Retrospective/Prospective: "retrospective performance study on CT data previously acquired for RT treatment planning."

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    • Number of Experts for Ground Truth: "a team of experienced annotators mentored by radiologists or radiation oncologists" for initial manual annotation. "a board-certified radiation oncologist" performed a quality assessment including review and correction of each annotation. The document does not specify an exact number of individuals for these teams, but describes the roles and qualifications.
    • Qualifications of Experts:
      • "experienced annotators"
      • "radiologists or radiation oncologists" (mentors for annotators)
      • "board-certified radiation oncologist" (for quality assessment/review)

    4. Adjudication Method for the Test Set

    The document describes the ground truth establishment process as: "manual annotation" by experienced annotators mentored by radiologists/radiation oncologists, followed by a "quality assessment including review and correction of each annotation was done by a board-certified radiation oncologist." This indicates a hierarchical review/correction process rather than a multi-reader consensus adjudication between equally-weighted readers (e.g., 2+1 or 3+1). The final accepted contour after the board-certified radiation oncologist's review served as the ground truth.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study was described. The study focused on the standalone performance of the AI algorithm against established ground truth and comparison with a predicate device and literature. The document does not mention an effect size of how much human readers improve with AI vs. without AI assistance. The intended use specifies that the AI-generated contours must be reviewed, edited, and accepted by trained medical professionals, implying a human-in-the-loop workflow, but the validation study presented focuses on the AI's autonomous segmentation accuracy.

    6. Standalone (Algorithm Only) Performance Study

    Yes, a standalone performance study was done. The performance metrics (Dice coefficient, ASSD) and the comparison to an expert-established ground truth demonstrate the algorithm's autonomous segmentation capability. The study validated the "autocontouring algorithms" and their performance.

    7. Type of Ground Truth Used

    The ground truth used for the test set was expert consensus / manual annotation based on clinical guidelines. Specifically: "Ground truth annotations were established following RTOG and clinical guidelines using manual annotation." This was further reviewed and corrected by a board-certified radiation oncologist.

    8. Sample Size for the Training Set

    The document provides the sample sizes for the training set for new organs introduced:

    • Table 6: Training Dataset Characteristics (Examples):
      • Lacrimal Glands Left/Right: 247
      • Pituitary Gland: 247
      • Humeral Head Left/Right: 207
      • Bowel Bag: 544
      • Pelvic Bone Left/Right: 160
      • Sacrum: 160
      • Mediastinal LN (various): 136
      • Femoral Head Left/Right: 160
      • Brainstem: 247
      • Esophagus: 247
      • Breast Left/Right: 172
      • Supraglottic Larynx: 247
      • Glottis: 247

    The total training set size for all organs is not explicitly summed, but these numbers indicate the scale of the training data used for the specific new organs.

    9. How the Ground Truth for the Training Set Was Established

    "In both the annotation process for the training and validation testing data, the annotation protocols for the OAR were defined following the applicable guidelines. The ground truth annotations were drawn manually by a team of experienced annotators mentored by radiologists or radiation oncologists using an internal annotation tool. Additionally, a quality assessment including review and correction of each annotation was done by a board-certified radiation oncologist using validated medical image annotation tools."

    This indicates the same rigorous process of expert manual annotation and review was applied to establish ground truth for the training set as for the test set. The validation testing and training data were explicitly stated to be independent.

    K Number: K232928
    Date Cleared: 2024-05-07 (230 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Predicate For: N/A
    Why did this record match? Reference Devices: K223774

    Intended Use

    DeepContour is a deep learning based medical imaging software that allows trained healthcare professionals to use DeepContour as a tool to automatically process CT images. In addition, DeepContour is suitable for the following conditions:

    1. Creation of contours using deep-learning algorithms, support for quantitative analysis and organ HU distribution statistics, transfer of contour files to TPS, and creation of management archives for patients.
    2. Analysis of anatomical structures at different anatomical positions.
    3. Rigid and elastic registration based on CT.
    4. 3D reconstruction, editing, and other visual tools based on organ contours.
    Device Description

    DeepContour is a deep learning based medical imaging software that allows trained healthcare professionals to use DeepContour as a tool to automatically process CT images. DeepContour contouring workflow supports CT input data and produces RTSTRUCT outputs. The organ segmentation can also be combined into templates, which can be customized by different hospitals according to their needs. DeepContour provides an interactive contouring application to edit and review the contours automatically generated by DeepContour.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the DeepContour (V1.0) device, based on the provided FDA 510(k) Summary:

    Acceptance Criteria and Reported Device Performance

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state "acceptance criteria" as a set of predefined quantitative thresholds the device must meet. Instead, the study's aim is to demonstrate that DeepContour's performance is equivalent to or better than the predicate devices. The primary metric used for this comparison is the Dice coefficient, and the implicit acceptance criterion is that DeepContour's performance is not significantly worse than the predicates.

    The equivalence definition is stated as: "the lower bound of 95th percentile confidence interval of the subject device segmentation is greater than 0.1 Dice lower than the mean of predicate device segmentation"; in other words, the subject device's 95% CI lower bound must exceed the predicate device's mean Dice minus 0.1.

    Below is a table summarizing the reported Dice coefficients for DeepContour and the predicate devices for a selection of structures. It also includes the summary Average Symmetric Surface Distance (ASSD) comparison.

    Table 1: Acceptance Criteria (Implicit) and Reported Device Performance

    | Metric | Implicit Acceptance Criteria | DeepContour Reported Performance (Mean ± Std (95% CI Lower Bound)) | Predicate (AI-Rad Companion Organs RT) Reported Performance (Mean ± Std) | Predicate (Contour ProtégéAI) Reported Performance (Mean ± Std) |
    |---|---|---|---|---|
    | Dice Coefficient | Lower bound of the 95% CI of DeepContour segmentation > (mean of predicate segmentation - 0.1 Dice) | See the "Clinical performance comparison" tables below for specific structures. | See the "Clinical performance comparison" tables below for specific structures. | See the "Clinical performance comparison" tables below for specific structures. |
    | ASSD (median) | Median ASSD comparable to predicate devices. | 0.95 (95% CI: [0.85, 1.13]) | 0.96 (95% CI: [0.84, 1.15]) | 0.95 (95% CI: [0.86, 1.17]) |

    Table 5: Clinical performance comparison (Peking Union Medical College Hospital) - Selected Structures

    | Structure | DeepContour | AI-Rad Companion Organs RT (K221305) | Contour ProtégéAI (K223774) |
    |---|---|---|---|
    | Brain | 0.98 ± 0.01 (0.97) | 0.93 ± 0.11 | 0.98 ± 0.01 |
    | BrainStem | 0.91 ± 0.03 (0.89) | 0.90 ± 0.02 | 0.82 ± 0.09 |
    | Eye_L | 0.89 ± 0.02 (0.88) | 0.81 ± 0.06 | 0.87 ± 0.06 |
    | Lung_L | 0.98 ± 0.05 (0.96) | 0.92 ± 0.16 | 0.96 ± 0.02 |
    | Heart | 0.93 ± 0.16 (0.90) | 0.91 ± 0.06 | 0.90 ± 0.07 |
    | Liver | 0.96 ± 0.07 (0.95) | 0.86 ± 0.17 | 0.93 ± 0.07 |
    | Kidney_L | 0.92 ± 0.03 (0.91) | 0.82 ± 0.13 | 0.92 ± 0.05 |
    | Pancreas | 0.86 ± 0.01 (0.86) | 0.87 ± 0.03 | 0.45 ± 0.22 |
    | Bladder | 0.95 ± 0.15 (0.93) | 0.87 ± 0.15 | 0.52 ± 0.19 |
    | Prostate | 0.87 ± 0.02 (0.85) | 0.74 ± 0.12 | 0.85 ± 0.06 |
    | SpinalCord | 0.93 ± 0.01 (0.92) | 0.66 ± 0.14 | 0.63 ± 0.16 |
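    As a worked illustration of the equivalence rule quoted in section 1 (the subject device's 95% CI lower bound must exceed the predicate mean minus 0.1 Dice), the sketch below applies it to the Prostate row of Table 5; it is an illustration only, not the sponsor's statistical code.

```python
# Worked check of the quoted equivalence rule using the Prostate row of
# Table 5 above (illustration only, not the sponsor's statistical code).
def passes_equivalence(subject_ci_lower: float, predicate_mean: float,
                       margin: float = 0.1) -> bool:
    """Pass if the subject's 95% CI lower bound exceeds predicate mean - margin."""
    return subject_ci_lower > predicate_mean - margin

deepcontour_prostate_lower = 0.85  # DeepContour Prostate: 0.87 ± 0.02, CI lower bound 0.85

print(passes_equivalence(deepcontour_prostate_lower, 0.74))  # vs AI-Rad Companion Organs RT -> True
print(passes_equivalence(deepcontour_prostate_lower, 0.85))  # vs Contour ProtégéAI -> True
```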

    Table 6: Clinical performance comparison (LCTSC American public datasets) - Selected Structures

    | Structure | DeepContour | AI-Rad Companion Organs RT (K221305) | Contour ProtégéAI (K223774) |
    |---|---|---|---|
    | SpinalCord | 0.92 ± 0.02 (0.91) | 0.64 ± 0.13 | 0.62 ± 0.21 |
    | Lung_L | 0.97 ± 0.15 (0.96) | 0.90 ± 0.13 | 0.95 ± 0.05 |
    | Heart | 0.92 ± 0.11 (0.90) | 0.91 ± 0.04 | 0.90 ± 0.04 |
    | Esophagus | 0.89 ± 0.13 (0.86) | 0.75 ± 0.13 | 0.68 ± 0.19 |

    Table 7: Clinical performance comparison (Pancreas-CT American public datasets) - Selected Structures

    | Structure | DeepContour | AI-Rad Companion Organs RT (K221305) | Contour ProtégéAI (K223774) |
    |---|---|---|---|
    | Spleen | 0.90 ± 0.05 (0.88) | 0.91 ± 0.12 | 0.89 ± 0.08 |
    | Pancreas | 0.85 ± 0.03 (0.83) | 0.84 ± 0.02 | 0.43 ± 0.25 |
    | Kidney_L | 0.93 ± 0.02 (0.91) | 0.84 ± 0.03 | 0.92 ± 0.17 |
    | Liver | 0.97 ± 0.03 (0.97) | 0.85 ± 0.13 | 0.92 ± 0.06 |
    | Stomach | 0.85 ± 0.02 (0.84) | 0.80 ± 0.05 | 0.81 ± 0.17 |

    2. Sample sizes used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Sample Size: 203 CT images.
      • 100 clinical datasets
      • 103 American public datasets (60 from LCTSC, 43 from Pancreas-CT)
    • Data Provenance:
      • 100 clinical datasets: Retrospectively collected from Peking Union Medical College Hospital (China).
      • 103 American public datasets: Publicly available datasets originally from American sources.
        • 2017 Lung CT Segmentation Challenge (LCTSC): 60 thoracic CT scan patients.
        • Pancreas-CT (PCT): 43 abdominal contrast-enhanced CT scan patients.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • For the 100 clinical datasets (China): Two radiation oncologists with more than 10 years of clinical practice established the ground truth annotations. Their detailed CVs are in Appendix 2 (not provided in the input, but referenced).
    • For the 103 American public datasets: Annotated by American doctors. (Specific qualifications not detailed in the provided text).

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • For the 100 clinical datasets (China): The ground truth was established by two radiation oncologists. A third qualified internal staff member was available to adjudicate if needed. This implies a 2+1 adjudication method if there was disagreement.
    • For the 103 American public datasets: No explicit adjudication method is mentioned, only that they were "annotated by American doctors."

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    The provided text does not describe a Multi-Reader Multi-Case (MRMC) comparative effectiveness study involving human readers with and without AI assistance to measure improvement in human performance. The study focuses on the standalone performance of the AI algorithm (DeepContour) compared to predicate devices.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Yes, a standalone performance study was done. The entire "Performance comparison" section (Tables 5, 6, 7, and 8) details the Dice coefficients and ASSD values for the DeepContour algorithm, directly comparing its segmentation performance against the ground truth and the predicate devices. There is no human reader involved in generating the DeepContour results reported in these tables.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • For the 100 clinical datasets (China): Expert consensus (two radiation oncologists applying RTOG and clinical guidelines using manual annotation, with a third available for adjudication).
    • For the 103 American public datasets: Expert annotation by American doctors. (Implied expert consensus or single expert annotation from the original dataset creation process, as described by the original publications).

    8. The sample size for the training set

    • Number of datasets: 800 CT images.
      • 200 for the head and neck region
      • 200 for the chest region
      • 200 for the abdomen region
      • 200 for the pelvic region
      • (Of these, 160 cases per region were used for training and 40 cases per region for validation; a minimal split sketch follows below.)
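    The sketch below illustrates the per-region 160/40 split described above; the region labels, case identifiers, and random seed are assumptions for illustration, not details from the submission.

```python
# Minimal sketch of a per-region 160/40 train/validation split; region labels,
# case IDs, and the seed are illustrative assumptions.
import random

regions = ["head_neck", "chest", "abdomen", "pelvis"]
cases = {r: [f"{r}_{i:03d}" for i in range(200)] for r in regions}  # 200 cases per region

rng = random.Random(0)
train, val = {}, {}
for r in regions:
    shuffled = cases[r][:]
    rng.shuffle(shuffled)
    train[r], val[r] = shuffled[:160], shuffled[160:]  # 160 training / 40 validation

print({r: (len(train[r]), len(val[r])) for r in regions})
# {'head_neck': (160, 40), 'chest': (160, 40), 'abdomen': (160, 40), 'pelvis': (160, 40)}
```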

    9. How the ground truth for the training set was established

    The initial segmentations were reviewed and corrected by two radiation oncologists for model training, with a third qualified internal staff member available to adjudicate if needed. This indicates an expert review and correction process, likely similar to the 2+1 adjudication method used for the test set ground truth.
