Found 39 results

510(k) Data Aggregation

    K Number: K250427
    Date Cleared: 2025-05-28 (103 days)
    Product Code: QKB
    Regulation Number: 892.2050
    Intended Use

    TAIMedImg DeepMets is a software device intended to assist trained medical professionals by providing initial object contours on axial T1-weighted contrast-enhanced (T1WI+C) brain magnetic resonance (MR) images to accelerate workflow for radiation therapy treatment planning.

    TAIMedImg DeepMets is intended only for patients with known (imaging-diagnosed) brain metastases (BM), i.e., cancer cells that have spread from a primary site to the brain. It is not intended for use with images of other brain tumors or of other body parts. The software is intended for use with BM lesions with a diameter of ≥ 10 mm.

    TAIMedImg DeepMets uses an artificial intelligence algorithm to contour images and offers automated segmentation of Gross Tumor Volume (GTV) contours for brain metastases. The software is an adjunctive tool and is not intended to replace the user's current standard practice of manual contouring. All automatic output generated by the software shall be thoroughly reviewed by a trained medical professional prior to delivering any therapy or treatment. The physician retains ultimate responsibility for making the final diagnosis and treatment decision.

    TAIMedImg DeepMets is intended to be used by medical professionals trained in the use of the device.

    Only DICOM images of adult patients are considered valid input. DeepMets does not support DICOM images of patients with any of the following exclusions:

    • (i) presence of a prior craniotomy
    • (ii) a clinical imaging diagnosis of brain tumors other than BM
    • (iii) patient motion: excessive motion leading to artifacts that make the scan technically inadequate

    Medical professionals must finalize (confirm or modify) the contours generated by TAIMedImg DeepMets, as necessary, using an external platform available at the facility that supports DICOM-RT viewing/editing functions, such as image visualization software or a treatment planning system.

    Device Description

    TAIMedImg DeepMets is a software application system intended for use in the contouring (segmentation) of brain magnetic resonance (MR) images. The device comprises an AI inference module and a DICOM Radiotherapy Structure Sets (RTSS, or RTSTRUCT) converter module.

    The AI inference module consists of image preprocessing, deep learning neural networks, and postprocessing components, and is intended to contour brain metastases on axial T1-weighted contrast-enhanced (T1WI+C) MR images. It utilizes deep learning neural networks to generate contours and annotations for the diagnosed brain metastases.

    The DICOM RTSS converter module converts the contours and annotations, along with metadata, into a standard DICOM-RTSTRUCT file, making them compatible with radiotherapy treatment planning systems.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for TAIMedImg DeepMets, based on the provided FDA 510(k) clearance letter:

    Acceptance Criteria and Device Performance

    | Metric | Reported Device Performance (Mean) | 95% Confidence Interval | Acceptance Criteria | Source |
    |---|---|---|---|---|
    | Lesion-Wise Sensitivity (Se) (%) | 89.97 | (86.51, 93.43) | > 80 | Deep learning |
    | False-Positive Rate (FPR) (FPs/case) | 0.354 | (0.215, 0.481) | | |
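For context, lesion-wise sensitivity and false positives per case can be computed as in the sketch below. The overlap rule (any voxel intersection counts as a detection) and the helper name are assumptions; the submission's exact matching criterion is not described here.

```python
# Illustrative sketch (not the sponsor's method): lesion-wise sensitivity and
# false positives per case, given per-case lists of ground-truth and predicted
# lesions, each lesion represented as a set of voxel coordinates.

def lesion_wise_metrics(cases):
    """cases: list of (gt_lesions, pred_lesions) pairs."""
    tp = fn = fp = 0
    for gt_lesions, pred_lesions in cases:
        for gt in gt_lesions:
            # Ground-truth lesion is detected if any prediction overlaps it
            if any(gt & pred for pred in pred_lesions):
                tp += 1
            else:
                fn += 1
        for pred in pred_lesions:
            # Prediction with no ground-truth overlap is a false positive
            if not any(pred & gt for gt in gt_lesions):
                fp += 1
    sensitivity = 100.0 * tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / len(cases)  # false positives per case
    return sensitivity, fpr

cases = [
    ([{(1, 1), (1, 2)}], [{(1, 2)}, {(9, 9)}]),  # one detected lesion, one FP
    ([{(4, 4)}], [{(4, 4)}]),                    # detected, no FP
]
se, fpr = lesion_wise_metrics(cases)  # se = 100.0, fpr = 0.5
```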

    K Number: K242925
    Device Name: MR Contour DL
    Date Cleared: 2025-04-01 (189 days)
    Product Code: QKB
    Regulation Number: 892.2050
    Intended Use

    MR Contour DL generates a Radiotherapy Structure Set (RTSS) DICOM with segmented organs at risk which can be used by trained medical professionals. It is intended to aid in radiation therapy planning by generating initial contours to accelerate workflow for radiation therapy planning. It is the responsibility of the user to verify the processed output contours and user-defined labels for each organ at risk and correct the contours/labels as needed. MR Contour DL is intended to be used with images acquired on MR scanners, in adult patients.

    Device Description

    MR Contour DL is a post-processing application intended to assist a clinician by generating contours of organs at risk (OAR) from MR images in the form of a DICOM Radiotherapy Structure Set (RTSS) series. MR Contour DL is designed to automatically contour organs in the head/neck and in the pelvis for Radiation Therapy (RT) planning of adult cases. The output of MR Contour DL is intended to be used by radiotherapy (RT) practitioners after review, editing if necessary, and confirmation of the accuracy of the contours for use in radiation therapy planning.

    MR Contour DL uses customizable input parameters that define RTSS description, RTSS labeling, organ naming and coloring. MR Contour DL does not have a user interface of its own and can be integrated with other software and hardware platforms. MR Contour DL has the capability to transfer the input and output series to the customer desired DICOM destination(s) for review.

    MR Contour DL uses deep learning segmentation algorithms that have been designed and trained specifically for the task of generating organ at risk contours from MR images. MR Contour DL is designed to contour 37 different organs or structures using the deep learning algorithms in the application processing workflow.

    The input of the application is MR DICOM images in adult patients acquired from compatible MR scanners. In the user-configured profile, the user has the flexibility to choose both the covered anatomy of input scan and the specific organs for segmentation. The proposed device has been tested on GE HealthCare MR data.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) clearance letter for MR Contour DL:

    1. Table of Acceptance Criteria and Reported Device Performance

    Device: MR Contour DL

    | Metric | Organ Anatomy Region | Acceptance Criteria | Reported Performance (Mean) | Outcome |
    |---|---|---|---|---|
    | DICE Similarity Coefficient (DSC) | Small Organs (e.g., chiasm, inner-ear) | ≥ 50% | 67.4% - 98.8% (across all organs) | Met |
    | | Medium Organs (e.g., brainstem, eye) | ≥ 65% | 79.6% - 95.5% (across relevant organs) | Met |
    | | Large Organs (e.g., bladder, head-body) | ≥ 80% | 90.3% - 99.3% (across relevant organs) | Met |
    | 95th percentile Hausdorff Distance (HD95) Comparison | All Organs | Improved or Equivalent to Predicate Device | Improved or Equivalent in 24/28 organs analyzed; average HD95 of 4.7 mm | |
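The two metrics in the table are standard segmentation measures. A minimal sketch, assuming binary NumPy masks as inputs (the function names `dice` and `hd95` are illustrative, not from the submission; real pipelines use spacing-aware surface distances, e.g. from SimpleITK or MedPy):

```python
# Dice similarity coefficient and 95th-percentile Hausdorff distance on
# binary masks, using brute-force point distances for clarity.
import numpy as np

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b):
    pa, pb = np.argwhere(a), np.argwhere(b)
    # Pairwise distances between all foreground voxels of the two masks
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    # Directed nearest-neighbor distances in both directions, pooled,
    # then the 95th percentile
    return np.percentile(np.concatenate([d.min(axis=1), d.min(axis=0)]), 95)

a = np.zeros((8, 8), bool); a[2:5, 2:5] = True  # 3x3 square
b = np.zeros((8, 8), bool); b[2:5, 3:6] = True  # same square, shifted one column
print(dice(a, b))  # 2*6/(9+9) ≈ 0.667
```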

    K Number: K242745
    Date Cleared: 2025-03-27 (197 days)
    Product Code: QKB
    Regulation Number: 892.2050
    Intended Use

    AI-Rad Companion Organs RT is a post-processing software intended to automatically contour DICOM CT and MR pre-defined structures using deep-learning-based algorithms.

    Contours that are generated by AI-Rad Companion Organs RT may be used as input for clinical workflows including external beam radiation therapy treatment planning. AI-Rad Companion Organs RT must be used in conjunction with appropriate software such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept contours generated by AI-Rad Companion Organs RT.

    The outputs of AI-Rad Companion Organs RT are intended to be used by trained medical professionals.

    The software is not intended to automatically detect or contour lesions.

    Device Description

    AI-Rad Companion Organs RT provides automatic segmentation of pre-defined structures such as Organs-at-risk (OAR) from CT or MR medical series, prior to dosimetry planning in radiation therapy. AI-Rad Companion Organs RT is not intended to be used as a standalone diagnostic device and is not a clinical decision-making software.

    CT or MR series of images serve as input for AI-Rad Companion Organs RT and are acquired as part of a typical scanner acquisition. Once processed by the AI algorithms, the generated contours in DICOM-RTSTRUCT format are reviewed in a confirmation window, allowing the clinical user to confirm or reject the contours before sending them to the target system. Optionally, the user may select to directly transfer the contours to a configurable DICOM node (e.g., the Treatment Planning System (TPS), which is the standard location for the planning of radiation therapy).

    AI-Rad Companion Organs RT must be used in conjunction with appropriate software, such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept the automatically generated contours. The output of AI-Rad Companion Organs RT must be reviewed and, where necessary, edited with appropriate software before the generated contours are accepted as input to treatment planning steps. The output is intended to be used by qualified medical professionals, who can perform complementary manual editing of the contours or add any new contours in the TPS (or any other interactive contouring application supporting DICOM-RT objects) as part of the routine clinical workflow.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance Study for AI-Rad Companion Organs RT

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the AI-Rad Companion Organs RT device, particularly for the enhanced CT contouring algorithm, are based on comparing its performance to the predicate device and relevant literature/cleared devices. The primary metrics used are Dice coefficient and Absolute Symmetric Surface Distance (ASSD).

    Table 3: Acceptance Criteria of AIRC Organs RT VA50

    | Validation Testing Subject | Acceptance Criteria | Reported Device Performance (Summary) |
    |---|---|---|
    | Organs in Predicate Device | All organs segmented in the predicate device are also segmented in the subject device. | Confirmed. The device continued to segment all organs previously handled by the predicate. |
    | The average (AVG) Dice score difference between the subject and predicate device is | | |

    K Number: K242994
    Date Cleared: 2025-02-24 (151 days)
    Product Code: QKB
    Regulation Number: 892.2050
    Intended Use

    OncoStudio provides deep-learning-based automatic contouring of organs at risk in DICOM-RT format from CT images. The software can be used to produce initial contours for clinicians, to be confirmed by the radiation oncology department for treatment planning, or by other professions where a segmented mask of organs is needed.

    • Deep learning contouring from Head & Neck, Thorax, Abdomen, and Pelvis
    • Generates DICOM-RT structure of contoured objects
    • Manual Contouring
    • Receive, transmit, store, retrieve, display, and process medical images and DICOM objects
    Device Description

    OncoStudio is standalone software that provides deep-learning-based automatic contouring of organs at risk in DICOM-RT format from CT images. The software can be used to produce initial contours for clinicians, to be confirmed by the radiation oncology department for treatment planning, or by other professions where a segmented mask of organs is needed.

    • Deep learning contouring from Head & Neck, Thorax, Abdomen, and Pelvis
    • Generates DICOM-RT structure of contoured objects
    • Manual Contouring
    • Receive, transmit, store, retrieve, display, and process medical images and DICOM objects
      It also has the following general functions:
    • Patient management
    • Review of processed images
    • Opening and saving files
    AI/ML Overview

    Based on the provided text, here's a description of the acceptance criteria and the study that proves the device meets those criteria for OncoStudio (OS-01):

    The submission details a standalone performance test conducted to demonstrate the contouring capabilities of OncoStudio, an AI-powered software for automatic organ at risk contouring from CT images. The primary evaluation metric for acceptance was the Dice coefficient (DSC).

    1. Acceptance Criteria and Reported Device Performance

    The text explicitly states: "For the structures being compared, the mean Dice coefficient (DSC) of structures for each anatomical region (Head & Neck, Thorax, Abdomen, and Pelvis) should meet the established criteria." However, the specific numerical established criteria for the mean Dice coefficient for each anatomical region (Head & Neck, Thorax, Abdomen, and Pelvis) are not reported in the provided document. Similarly, the actual reported device performance (the mean DSC achieved for each region) is not explicitly stated in the visible sections.

    To fully answer this, a table would look like this, but with missing data based on the provided text:

    | Anatomical Region | Acceptance Criteria (Mean Dice Coefficient) | Reported Device Performance (Mean Dice Coefficient) |
    |---|---|---|
    | Head & Neck | Not specified in text | Not reported in text |
    | Thorax | Not specified in text | Not reported in text |
    | Abdomen | Not specified in text | Not reported in text |
    | Pelvis | Not specified in text | Not reported in text |

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 310 CT images.
      • 140 images from Yonsei Severance Hospital (Republic of Korea)
      • 121 images from OneMedNet (U.S.A.)
      • 49 images from University Hospital Basel (Switzerland)
    • Data Provenance: The data is from South Korea, U.S.A., and Switzerland. The text specifies it was "collected from Yonsei Severance Hospital (Republic of Korea), OneMedNet (U.S.A.), and University Hospital Basel (Switzerland)". The data from OneMedNet is a "purchased set of CT data, mainly comprised of U.S.A. population." Yonsei Severance Hospital is in South Korea, and the Basel data is known as the TotalSegmentator dataset.
    • Retrospective or Prospective: Not explicitly stated, but the description of data collection "from the years 2012, 2016, and 2020 from the University Hospital Basel through picture archiving and communication system (PACS)" implies a retrospective collection for at least part of the dataset.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Three radiation oncologists established the ground truth segmentations for the test set.
    • Qualifications of Experts (for Yonsei Severance Hospital and OneMedNet data): The radiation oncologists had "3-20 years of clinical practice," and included "associate professor, assistant professor, and radiation oncologist resident from two institutions (Yonsei Cancer Center, Samsung Seoul Hospital)."
    • Qualifications of Experts (for University Hospital Basel data): The ground truth segmentation was "supervised by two physicians with 3 (M.S.) and 6 years (H.B.) of experience in body imaging, respectively." (Note: this refers to the public dataset from Basel, which was used for training, but the text states for the test set that "Ground truth segmentations were established by three radiation oncologists following international clinical guidelines" without distinguishing the origin for the test set ground truth specifically in terms of expert type, likely implying the former expert group applied to the test set as well for consistency).

    4. Adjudication Method for the Test Set

    The ground truthing process for the Yonsei Severance Hospital and OneMedNet data (which largely comprises the test set) was:

    • "First, the 1 radiation oncologist manually delineated the organs."
    • "Second, segmentation results generated by 1 radiation oncologist are sequentially edited and confirmed by 2 radiation oncologists. In this editing process, the first radiation oncologist makes corrections, and the corrected results are received and finalized by another radiation oncologist."

    This indicates a sequential review and confirmation process rather than a strict 2+1 or 3+1 consensus, with an initial delineator and then two subsequent reviewers/editors, likely leading to a consensus by the end of the process.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance was not mentioned in the provided text. The study described is a standalone performance test of the algorithm.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone performance test was done. The text explicitly states: "A standalone performance test was conducted to compare the contouring capabilities of OncoStudio."

    7. The Type of Ground Truth Used

    The ground truth used was expert consensus/manual annotation by radiation oncologists/physicians following international clinical guidelines (RTOG and clinical guidelines).

    8. The Sample Size for the Training Set

    • Total Training Data: 2,128 images.
      • 731 images from Yonsei Severance Hospital (Republic of Korea)
      • 194 images from OneMedNet (U.S.A)
      • 1203 images from University Hospital Basel (Switzerland)

    The total collected data was 2,438 datasets (315 US, 871 Korea, 1252 Europe). From this, 310 datasets were allocated to the test set, and the remaining 2,128 were used for training.

    9. How the Ground Truth for the Training Set Was Established

    The ground truth for the training set was established similarly to the test set:

    • For Yonsei Severance Hospital (Korea) and OneMedNet (U.S.) data: Established by three radiation oncologists with 3-20 years of clinical practice following RTOG and clinical guidelines using manual annotation. The process involved initial manual delineation by one radiation oncologist, followed by sequential editing and confirmation by two other radiation oncologists.
    • For University Hospital Basel (Europe) data (TotalSegmentator dataset): This is public data where ground truth was established by manual segmentation and refinement supervised by two physicians with 3 and 6 years of experience in body imaging.

    K Number: K250035
    Date Cleared: 2025-02-03 (27 days)
    Product Code: QKB
    Regulation Number: 892.2050
    Reference & Predicate Devices: N/A
    Intended Use

    Trained medical professionals use Contour ProtégéAI as a tool to assist in the automated processing of digital medical images of modalities CT and MR, as supported by ACR/NEMA DICOM 3.0. In addition, Contour ProtégéAI supports the following indications:

    · Creation of contours using machine-learning algorithms for applications including, but not limited to, quantitative analysis, aiding adaptive therapy, transferring contours to radiation therapy treatment planning systems, and archiving contours for patient follow-up and management.

    · Segmenting anatomical structures across a variety of CT anatomical locations.

    · And segmenting the prostate, the seminal vesicles, and the urethra within T2-weighted MR images.

    Appropriate image visualization software must be used to review and, if necessary, edit results automatically generated by Contour ProtégéAI.

    Device Description

    Contour ProtégéAI+ is an accessory to MIM software that automatically creates contours on medical images through the use of machine-learning algorithms. It is designed for use in the processing of medical images and operates on Windows, Mac, and Linux computer systems. Contour ProtégéAI+ is deployed on a remote server using the MIMcloud service for data management and transfer, or locally on the workstation or server running MIM software.

    AI/ML Overview

    Here's a breakdown of Contour ProtégéAI+'s acceptance criteria and study information, based on the provided text:

    Acceptance Criteria and Device Performance

    The acceptance criteria for each structure's inclusion in the final models were a combination of statistical tests and user evaluation:

    | Acceptance Criteria | Reported Device Performance (Contour ProtégéAI+) |
    |---|---|
    | Statistical non-inferiority of the Dice score compared with the reference predicate (MIM Atlas). | For most structures, the Contour ProtégéAI+ Dice score mean and 95th percentile confidence bound were equivalent to or better than the MIM Atlas. Equivalence was defined as the lower 95th percentile confidence bound of Contour ProtégéAI+ being greater than the mean MIM Atlas performance minus 0.1 Dice. Results are shown in Table 2, with '*' indicating demonstrated equivalence. |
    | Statistical non-inferiority of the Mean Distance Accuracy (MDA) score compared with the reference predicate (MIM Atlas). | For most structures, the Contour ProtégéAI+ MDA score mean and 95th percentile confidence bound were equivalent to or better than the MIM Atlas. Equivalence was defined as the lower 95th percentile confidence bound of Contour ProtégéAI+ being greater than the mean MIM Atlas performance minus 0.1 Dice. Results are shown in Table 2, with '*' indicating demonstrated equivalence. |
    | Average user evaluation of 2 or higher (on a three-point scale: 1=negligible, 2=moderate, 3=significant time savings). | The "External Evaluation Score" (Table 2) consistently shows scores of 2 or higher across all listed structures, indicating moderate to significant time savings. |
    | (For models as a whole) Statistically non-inferior cumulative Added Path Length (APL) compared to the reference predicate. | For all 4.2.0 CT models (Thorax, Abdomen, Female Pelvis, SurePlan MRT), equivalence in cumulative APL was demonstrated (Table 3), with Contour ProtégéAI+ showing lower mean APL values than MIM Atlas. |
    | (For localization accuracy) No specific passing criterion, but results are included. | Localization accuracy results (Table 4) are provided as percentages of images successfully localized for both "Relevant FOV" and "Whole Body CT," ranging from 77% to 100% depending on the structure and model. |

    Note: Cells highlighted in orange in the original document indicate non-demonstrated equivalence (not reproducible in markdown), and cells marked with '**' indicate that equivalence was not demonstrated because the minimum sample size was not met for that contour.
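The non-inferiority construction described above can be sketched as follows. The submission's exact statistical procedure is not given, so the percentile bootstrap and the helper names here are assumptions, shown only to make the 0.1-Dice-margin criterion concrete.

```python
# Hedged sketch of a non-inferiority check with a 0.1 Dice margin: the lower
# 95% bootstrap confidence bound of the subject device's mean Dice must exceed
# the reference mean minus the margin.
import random

def lower_bound_bootstrap(scores, n_boot=2000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(scores, k=len(scores))) / len(scores)
        for _ in range(n_boot)
    )
    return means[int(alpha * n_boot)]  # lower (1 - alpha) percentile bound

def non_inferior(subject_scores, reference_mean, margin=0.1):
    return lower_bound_bootstrap(subject_scores) > reference_mean - margin

subject = [0.88, 0.91, 0.90, 0.87, 0.92, 0.89, 0.90, 0.93]
print(non_inferior(subject, reference_mean=0.85))  # True for this sample
```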

    Study Details

    1. Sample size used for the test set and the data provenance:

      • Test Set Sample Size: The Contour ProtégéAI+ subject device was evaluated on a pool of 770 images.
      • Data Provenance: The images were gathered from 32 institutions. The verification data used for testing is from a set of institutions that are totally disjoint from the datasets used to train each model. Patient demographics for the testing data are: 53.4% female, 31.3% male, 15.3% unknown; 0.3% ages 0-20, 4.7% ages 20-40, 20.9% ages 40-60, 50.0% ages 60+, 24.1% unknown; varying scanner manufacturers (GE, Siemens, Philips, Toshiba, unknown). The data is retrospective, originating from clinical treatment plans according to the training set description.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • The document implies that the ground truth for the test set was validated against "original ground-truth contours" when measuring Dice and MDA against MIM Maestro. However, the expert qualifications are explicitly stated for the training set ground truth, which often implies a similar standard for the test set.
      • Ground truth (for training/re-segmentation) was established by:
        • Consultants (physicians and dosimetrists) specifically for this purpose, outside of clinical practice.
        • Initial segmentations were reviewed and corrected by radiation oncologists.
        • Final review and correction by qualified staff at MIM Software (MD or licensed dosimetrists).
        • All segmenters and reviewers were instructed to ensure the highest quality training data according to relevant published contouring guidelines.
    3. Adjudication method for the test set:

      • The document doesn't explicitly describe a specific adjudication method like "2+1" or "3+1" for the test set ground truth. However, it does state that "Detailed instructions derived from relevant published contouring guidelines were prepared for the dosimetrists. The initial segmentations were then reviewed and corrected by radiation oncologists against the same standards and guidelines. Qualified staff at MIM Software (MD or licensed dosimetrists) then performed a final review and correction." This process implies a multi-expert review and correction process to establish the ground truth used for both training and evaluation, ensuring a high standard of accuracy.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

      • A direct MRMC comparative effectiveness study measuring human readers' improvement with AI versus without AI assistance (i.e., human-in-the-loop performance) is not explicitly described in terms of effect size.
      • Instead, the study evaluates the standalone performance of the AI device (Contour ProtégéAI+) against a reference device (MIM Maestro atlas segmentation) and user evaluation of time savings.
      • The "Average user evaluation of 2 or higher" on a three-point scale (1=negligible, 2=moderate, 3=significant time savings) provides qualitative evidence of perceived improvement in workflow rather than a quantitative measure of diagnostic accuracy improvement due to AI assistance. "Preliminary user evaluation conducted as part of testing demonstrated that Contour ProtégéAI+ yields comparable time-saving functionality when creating contours as other commercially available automatic segmentation products."
    5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

      • Yes, a standalone performance evaluation was conducted. The primary comparisons for Dice score, MDA, and cumulative APL are between the Contour ProtégéAI+ algorithm's output and the ground truth, benchmarked against the predicate device's (MIM Maestro atlas segmentation) standalone performance. The results in Table 2 and Table 3 directly show the algorithm's performance.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • Expert Consensus Contour (and review): The ground truth was established by expert re-segmentation of images (by consultants, physicians, and dosimetrists) specifically for this purpose, reviewed and corrected by radiation oncologists, and then subjected to a final review and correction by qualified MIM Software staff (MD or licensed dosimetrists). This indicates a robust expert consensus process based on established clinical guidelines.
    7. The sample size for the training set:

      • The document states that the CT images for the "training set were obtained from clinical treatment plans for patients prescribed external beam or molecular radiotherapy". However, it does not provide a specific numerical sample size for the training set, only for the test set (770 images). It only mentions being "re-segmented by consultants... specifically for this purpose".
    8. How the ground truth for the training set was established:

      • The ground truth for the training set was established through a multi-step expert process:
        • CT images from clinical treatment plans were re-segmented by consultants (physicians and dosimetrists), explicitly for the purpose of creating training data, outside of clinical practice.
        • Detailed instructions from relevant published contouring guidelines were provided to the dosimetrists.
        • Initial segmentations were reviewed and corrected by radiation oncologists against the same standards and guidelines.
        • A final review and correction was performed by qualified staff at MIM Software (MD or licensed dosimetrists).
        • All experts were instructed to spend additional time to ensure the highest quality training data, contouring all specified OAR structures on all images according to referenced standards.

    K Number: K242729
    Date Cleared: 2024-12-09 (90 days)
    Product Code: QKB
    Regulation Number: 892.2050
    Intended Use

    AutoContour is intended to assist radiation treatment planners in contouring and reviewing structures within medical images in preparation for radiation therapy treatment planning.

    Device Description

    As with AutoContour Model RADAC V3, the AutoContour Model RADAC V4 device is software that uses DICOM-compliant image data (CT or MR) as input to: (1) automatically contour various structures of interest for radiation therapy treatment planning using machine-learning-based contouring (the deep-learning-based structure models are trained on imaging datasets consisting of anatomical organs of the head and neck, thorax, abdomen, and pelvis for adult male and female patients), (2) allow the user to review and modify the resulting contours, and (3) generate DICOM-compliant structure set data that can be imported into a radiation therapy treatment planning system.

    AutoContour Model RADAC V4 consists of 3 main components:

      1. A .NET client application designed to run on the Windows Operating System, allowing the user to load image and structure sets for upload to the cloud-based server for automatic contouring, perform registration with other image sets, and review, edit, and export the structure set.
      2. A local "agent" service designed to run on the Windows Operating System that is configured by the user to monitor a network storage location for new CT and MR datasets to be automatically contoured.
      3. A cloud-based automatic contouring service that produces initial contours based on image sets sent by the user from the .NET client application.
    AI/ML Overview

    Here's an analysis of the acceptance criteria and study findings for the Radformation AutoContour (Model RADAC V4) device, based on the provided text:

    1. Acceptance Criteria and Reported Device Performance

    The primary acceptance criterion for the automated contouring models is the Dice Similarity Coefficient (DSC), which measures the spatial overlap between the AI-generated contour and the ground truth contour. The criteria vary based on the estimated size of the anatomical structure. Additionally, for external clinical testing, an external reviewer rating was used to assess clinical appropriateness.

    | Acceptance Criteria Category | Metric | Performance Criteria | Reported Device Performance |
    | Contouring Accuracy (CT Models) | Mean Dice Similarity Coefficient (DSC) | Large Volume Structures: ≥ 0.80 | 0.92 ± 0.06 |
    | | | Medium Volume Structures: ≥ 0.65 | 0.85 ± 0.09 |
    | | | Small Volume Structures: ≥ 0.50 | 0.81 ± 0.12 |
    | Clinical Appropriateness (CT Models) | External Reviewer Rating (1-5 scale, higher is better) | Average Score ≥ 3 | 4.57 (across all CT models) |
    | Contouring Accuracy (MR Models) | Mean Dice Similarity Coefficient (DSC) | Large Volume Structures: ≥ 0.80 | 0.96 ± 0.03 (training data); 0.80 ± 0.09 (external data) |
    | | | Medium Volume Structures: ≥ 0.65 | 0.84 ± 0.07 (training data); 0.84 ± 0.09 (external data) |
    | | | Small Volume Structures: ≥ 0.50 | 0.74 ± 0.09 (training data); 0.61 ± 0.14 (external data) |
    | Clinical Appropriateness (MR Models) | External Reviewer Rating (1-5 scale, higher is better) | Average Score ≥ 3 | 4.6 (across all MR models) |
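    For reference, the Dice Similarity Coefficient used throughout these acceptance criteria measures volumetric overlap between two masks. A minimal sketch in Python with NumPy (an illustration of the metric itself, not Radformation's implementation):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient between two binary segmentation masks.

    DSC = 2 * |A intersect B| / (|A| + |B|); 0 = no overlap, 1 = identical masks.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denominator = pred.sum() + truth.sum()
    if denominator == 0:
        return 1.0  # both masks empty: treat as perfect agreement by convention
    return 2.0 * intersection / denominator
```

    In practice the two inputs would be an AI-generated contour and a ground-truth contour rasterized on the same image grid, and the per-structure values above are means over all test cases.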

    2. Sample Size Used for the Test Set and Data Provenance

    • CT Models Test Set:

      • Sample Size: For individual CT structure models, the number of testing sets ranged from 10 to 116 for the internal validation (Table 4) and 13 to 82 for the external clinical testing (Table 6). The document states "approximately 10% of the number of training image sets" were used for testing in the internal validation, with an average of 54 testing image sets per CT structure model.
      • Data Provenance: Imaging data for training was gathered from 4 institutions in 2 different countries (United States and Switzerland). External clinical testing data for CT models was sourced from various TCIA (The Cancer Imaging Archive) datasets (Pelvic-Ref, Head-Neck-PET-CT, Pancreas-CT-CB, NSCLC, LCTSC, QIN-BREAST) and shared from several unidentified institutions in the United States. Data was retrospective, as it was acquired and then used for model validation.
    • MR Models Test Set:

      • Sample Size: For individual MR structure models, the number of testing sets was 45 for internal validation (Table 8) and ranged from 5 to 45 for external clinical testing (Table 10). The document states an average of 45 testing image sets per MR Brain model and 77 testing image sets per MR Pelvis model were used for internal validation.
      • Data Provenance: Imaging data for training and internal testing was acquired from the Cancer Imaging Archive GLIS-RT dataset (for Brain models) and two open-source datasets plus one institution in the United States (for Pelvis models). External clinical testing data for MR models was from a clinical partner (for Brain models), two publicly available datasets (Prostate-MRI-U-S-Biopsy, Gold Atlas Pelvis, SynthRad), and two institutions utilizing MR Linacs for image acquisitions. Data was retrospective.
    • General Note: Test datasets were independent from those used for training.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Three (3) experts were used.
    • Qualifications of Experts: The ground truth was established by three clinically experienced experts consisting of 2 radiation therapy physicists and 1 radiation dosimetrist.

    4. Adjudication Method for the Test Set

    • Method: Ground truth for each test data set was generated manually by the three experts, using consensus (NRG/RTOG) guidelines as appropriate. This implies an expert consensus method, likely involving discussion and agreement among the three. The document does not specify a quantitative adjudication method like "2+1" or "3+1," but rather a "consensus" guided by established clinical guidelines.
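    The submission does not describe the fusion mechanics of the consensus. One common simple scheme (a hypothetical sketch, not necessarily what was used for RADAC V4) is per-voxel majority voting across the experts' manual contours:

```python
import numpy as np

def majority_vote(masks):
    """Fuse expert segmentation masks by strict per-voxel majority vote.

    masks: list of same-shape boolean arrays, one per expert.
    Returns a boolean mask that is True wherever more than half of the experts agree.
    """
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    votes = stack.sum(axis=0)           # number of experts marking each voxel
    return votes * 2 > len(masks)       # strict majority
```

    With three experts, as here, a voxel is included only when at least two of the three contoured it.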

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    • The document does not report an MRMC comparative effectiveness study comparing human readers with AI assistance versus without AI assistance. The study focuses purely on the AI's performance and its clinical appropriateness as rated by external reviewers.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    • Yes, a standalone performance evaluation was done. The core of the performance data presented (Dice Similarity Coefficient) is a measure of the algorithm's direct output compared to the ground truth, without a human in the loop during the contour generation phase. The external reviewer ratings also assess the standalone performance of the AI-generated contours regarding their clinical utility for subsequent editing and approval.
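    The standalone figures reported throughout this submission (per-structure mean ± standard deviation of DSC) are simple aggregates of case-level scores. As a hypothetical sketch of that aggregation step (the helper name is illustrative, not from the submission):

```python
import statistics

def summarize_dsc(per_case_dsc):
    """Aggregate per-case Dice scores into the 'mean ± std dev' form reported above."""
    mean = statistics.mean(per_case_dsc)
    std = statistics.stdev(per_case_dsc) if len(per_case_dsc) > 1 else 0.0
    return mean, std

# Example: three hypothetical test cases for one structure model
mean, std = summarize_dsc([0.80, 0.90, 1.00])
```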

    7. The Type of Ground Truth Used

    • Type: The ground truth used was expert consensus, specifically from three clinically experienced experts (2 radiation therapy physicists and 1 radiation dosimetrist), guided by NRG/RTOG guidelines.

    8. The Sample Size for the Training Set

    • CT Models Training Set: For CT structure models, there was an average of 341 training image sets.
    • MR Models Training Set: For MR Brain models, there was an average of 149 training image sets. For MR Pelvis models, there was an average of 306 training image sets.

    9. How the Ground Truth for the Training Set Was Established

    The document states that the deep-learning based structure models were "trained using imaging datasets consisting of anatomical organs" and that the "test datasets were independent from those used for training." While it extensively details how ground truth was established for the test sets (manual generation by three experts using consensus and NRG/RTOG guidelines), it does not explicitly describe how the ground truth for the training sets was established. However, given the nature of deep learning models for medical image segmentation, it is highly probable that the training data also had meticulously generated, expert-annotated ground truth contours, likely following similar rigorous processes as the test sets, potentially from various institutions or public datasets. The consistency of the model architecture and training methodologies (e.g., "very similar CNN architecture was used to train these new CT models") suggests a standardized approach to data preparation, including ground truth generation, for both training and testing.


    K Number
    K241490
    Manufacturer
    Date Cleared
    2024-10-18

    (147 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    QKB

    Intended Use

    Contour+ (MVision AI Segmentation) is a software system for image analysis algorithms to be used in radiation therapy treatment planning workflows. The system includes processing tools for automatic contouring of CT and MR images using machine learning based algorithms. The produced segmentation templates for regions of interest must be transferred to appropriate image visualization systems as an initial template for a medical professional to visualize, review, modify and approve prior to further use in clinical workflows.

    The system creates initial contours of pre-defined structures of common anatomical sites, i.e., Head and Neck, Brain, Breast, Lung and Abdomen, Male Pelvis, and Female Pelvis.

    Contour+ (MVision AI Segmentation) is not intended to detect lesions or tumors. The device is not intended for use with real-time adaptive planning workflows.

    Device Description

    Contour+ (MVision AI Segmentation) is a software-only medical device (software system) that can be used to accelerate region of interest (ROI) delineation in radiotherapy treatment planning by automatic contouring of predefined ROIs and the creation of segmentation templates on CT and MR images.

    The Contour+ (MVision AI Segmentation) software system is integrated with a customer IT network and configured to receive DICOM CT and MR images, e.g., from a CT or MRI scanner or a treatment planning system (TPS). Automatic contouring of predefined ROIs is performed by pre-trained, locked, and static models that are based on machine learning using deep artificial neural networks. The models have been trained on several anatomical sites, including the brain, head and neck, bones, breast, lung and abdomen, male pelvis, and female pelvis using hundreds of scans from a diverse patient population. The user does not have to provide any contouring atlases. The resulting segmentation structure set is connected to the original DICOM images and can be transferred to an image visualization system (e.g., a TPS) as an initial template for a medical professional to visualize, modify and approve prior to further use in clinical workflows.

    AI/ML Overview

    The provided text does not include a table of acceptance criteria and the reported device performance, nor does it specify the sample sizes used for the test set, the number of experts for ground truth, or details on comparative effectiveness studies (MRMC).

    However, based on the available information, here is a description of the acceptance criteria and study details:

    Acceptance Criteria and Study for Contour+ (MVision AI Segmentation)

    The study evaluated the performance of automatic segmentation models by comparing them to ground truth segmentations using Dice Score (DSC) and Surface-Dice Score (S-DSC@2mm) as metrics. The acceptance criteria were based on a "set level of minimum agreement against ground truth segmentations determined through clinically relevant similarity metrics DSC and S-DSC@2mm." While specific numerical thresholds for these metrics are not provided, the submission states that the device fulfills "the same acceptance criteria" as the predicate device.

    It's important to note that the provided document is an FDA 510(k) clearance letter and not the full study report. As such, it summarizes the findings and affirms the device's substantial equivalence without detailing every specific test result or acceptance threshold.


    1. A table of acceptance criteria and the reported device performance

    | Metric | Acceptance Criteria | Reported Device Performance |
    | Dice Score (DSC) | Based on a "set level of minimum agreement against ground truth segmentations" (specific thresholds not provided). | "Performance verification and validation results for various subsets of the golden dataset show the generalizability and robustness of the device..." |
    | Surface-Dice Score (S-DSC@2mm) | Based on a "set level of minimum agreement against ground truth segmentations" (specific thresholds not provided). | "...Contour+ (MVision AI Segmentation) fulfills the same acceptance criteria, provides the intended benefits, and it is as safe and as effective as the predicate software version." |
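    The Surface-Dice at 2 mm tolerance (S-DSC@2mm) cited here measures the fraction of each contour's surface lying within 2 mm of the other contour's surface. A rough sketch with SciPy, illustrating the metric rather than MVision's actual implementation:

```python
import numpy as np
from scipy import ndimage

def surface_dice(mask_a, mask_b, spacing_mm, tol_mm=2.0):
    """Surface Dice at tolerance tol_mm between two binary 3D masks.

    spacing_mm: voxel spacing (z, y, x) in millimetres.
    Returns the fraction of both surfaces lying within tol_mm of the other surface.
    """
    mask_a = np.asarray(mask_a, dtype=bool)
    mask_b = np.asarray(mask_b, dtype=bool)

    def surface(mask):
        # surface voxels = mask voxels removed by one binary erosion step
        return mask & ~ndimage.binary_erosion(mask)

    sa, sb = surface(mask_a), surface(mask_b)
    # Euclidean distance (mm) from every voxel to the nearest surface voxel of the other mask
    dist_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing_mm)
    dist_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing_mm)
    close_a = (dist_to_b[sa] <= tol_mm).sum()   # a-surface voxels near b's surface
    close_b = (dist_to_a[sb] <= tol_mm).sum()   # b-surface voxels near a's surface
    total = sa.sum() + sb.sum()
    return (close_a + close_b) / total if total else 1.0
```

    Unlike the volumetric Dice score, this rewards agreement at the contour boundary, which is what an editing clinician actually has to correct.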

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Sample Size for Test Set: The exact sample size for the test (golden) dataset is not specified, but it's referred to as "various subsets of the golden dataset" and chosen to "achieve high granularity in performance evaluation tests."
    • Data Provenance: The datasets originate from "multiple EU and US clinical sites (with over 50% of data coming from US sites)." It is described as containing "hundreds of scans from a diverse patient population," ensuring representation of the "US population and medical practice." The text does not explicitly state if the data was retrospective or prospective, but the description of "hundreds of scans" from multiple sites suggests it is likely retrospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    The number of experts used to establish the ground truth for the test set is not specified in the provided text. The qualifications are vaguely mentioned as "radiotherapy experts" who performed "Performance validation of machine learning-based algorithms for automatic segmentation." No specific years of experience or board certifications are detailed.


    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The adjudication method for establishing ground truth on the test set is not specified in the provided text. The text only states that the auto-segmentations were compared to "ground truth segmentations."


    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    A multi-reader multi-case (MRMC) comparative effectiveness study focusing on the improvement of human readers with AI assistance versus without AI assistance is not explicitly described in the provided text.

    The text states: "Performance validation of machine learning-based algorithms for automatic segmentation was also carried out by radiotherapy experts. The results show that Contour+ (MVision AI Segmentation) assists in reducing the upfront effort and time required for contouring CT and MR images, which can instead be devoted by clinicians on refining and reviewing the software-generated contours." This indicates that experts reviewed the output and perceived a benefit in efficiency, but it does not detail a formal MRMC study comparing accuracy or time, with a specific effect size.


    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Yes, a standalone performance evaluation of the algorithm was conducted. The primary performance metrics (DSC and S-DSC@2mm) were calculated by directly comparing the "produced auto-segmentations to ground truth segmentations," which is a standalone assessment of the algorithm's output. The statement "Performance verification and validation results for various subsets of the golden dataset show the generalizability and robustness of the device" further supports this.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The ground truth used was expert consensus segmentations. The text repeatedly refers to comparing the device's output to "ground truth segmentations" established by "radiotherapy experts." There is no mention of pathology or outcomes data being used for ground truth.


    8. The sample size for the training set

    The exact sample size for the training set is not specified, but the models were "trained on several anatomical sites... using hundreds of scans from a diverse patient population."


    9. How the ground truth for the training set was established

    The text states that the machine learning models were "trained on several anatomical sites... using hundreds of scans from a diverse patient population." While it doesn't explicitly detail the process for establishing ground truth for the training set, it is implied to be through expert contouring/segmentation, as the validation uses "ground truth segmentations" which are established by "radiotherapy experts." Given the extensive training data required for machine learning, it's highly probable that these "hundreds of scans" also had expert-derived segmentations as their ground truth for training.


    K Number
    K241837
    Device Name
    Limbus Contour
    Manufacturer
    Date Cleared
    2024-10-09

    (106 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    QKB

    Intended Use

    Limbus Contour is a software-only medical device intended for use by trained radiation oncologists, dosimetrists and physicists to derive optimal contours for input to radiation treatment planning.

    Supported image modalities are Computed Tomography and Magnetic Resonance. The Limbus Contour Software assists in the following scenarios:

    Operates in conjunction with radiation treatment planning systems or DICOM viewing systems to load, save, and display medical images and contours for treatment evaluation and treatment planning.

    Creation, transformation, and modification of contours for applications including, but not limited to: transferring contours to radiotherapy treatment planning systems, aiding adaptive therapy and archiving contours for patient follow-up.

    Localization and definition of healthy anatomical structures.

    Limbus Contour is not intended for use with digital mammography.

    Device Description

    Limbus Contour is a stand-alone software medical device. It is a single-purpose, cross-platform application for automatic contouring (segmentation) of CT/MRI DICOM images via pre-trained and expert-curated machine learning models. The software is intended to be used by trained medical professionals to derive contours for input to radiation treatment planning. The Limbus Contour software segments normal tissues using machine learning models, followed by post-processing of the model prediction outputs. Limbus Contour does not display or store DICOM images and relies on existing radiotherapy treatment planning systems (TPS) and DICOM image viewers for display and modification of generated segmentations. Limbus Contour interfaces with the user's operating system (importing DICOM image .dcm files and exporting segmented DICOM RT-Structure Set .dcm files).

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study details for the Limbus Contour device, based on the provided FDA 510(k) submission information:


    1. Acceptance Criteria and Reported Device Performance

    The acceptance criterion for each contoured structure is that the Limbus DSC (Dice Similarity Coefficient) lower 95% confidence edge must be greater than or equal to the "Test DSC Threshold," which is derived from the mean minus the standard deviation of reference model DSCs from published machine learning autosegmentation models.
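    The "lower 95% confidence edge" values in the table below appear consistent with a two-sided Student-t confidence interval on the mean DSC (checked against the A_Aorta row, this reproduces the tabulated edge to within rounding). A sketch of the criterion, with the t critical value from SciPy; the function names are illustrative, not Limbus's code:

```python
import math
from scipy import stats

def lower_conf_edge(mean_dsc, std_dsc, n, conf=0.95):
    """Lower edge of a two-sided Student-t confidence interval on the mean DSC."""
    t_crit = stats.t.ppf(0.5 + conf / 2.0, df=n - 1)
    return mean_dsc - t_crit * std_dsc / math.sqrt(n)

def structure_passes(mean_dsc, std_dsc, n, threshold, conf=0.95):
    """Acceptance test: the lower confidence edge must meet the Test DSC Threshold."""
    return lower_conf_edge(mean_dsc, std_dsc, n, conf) >= threshold
```

    For the A_Aorta row (mean 0.909095, std dev 0.0455771, 10 scans) this yields approximately 0.8765, matching the tabulated 0.87649337 and exceeding the 0.81 threshold, hence "Passed."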

    Structure | Limbus Mean DSC | Limbus DSC Std Dev | Number of Scans | Limbus DSC lower 95% conf edge | Test DSC Threshold | Result
    A_Aorta0.9090950.0455771100.876493370.81Passed
    A_Aorta_Base0.9795880.0286193100.959116410.81Passed
    A_Aorta_I0.9380160.10304303100.864308580.81Passed
    A_Celiac0.7815020.27272084100.586422820.26Passed
    A_LAD0.6927660.06590144100.645626220.26Passed
    A_Mesenteric_S0.8572570.14185425100.755787630.26Passed
    A_Pulmonary0.9018670.03499015100.876838290.85Passed
    Applicator_Cylinder (beta)0.801117330.33037573150.608162740.374Passed
    Applicator_Ring (beta)0.9638630.07306595100.91159840.374Passed
    Atrium_L0.9770440.0180652100.964121830.79Passed
    Atrium_R0.9784510.01852677100.965198670.78Passed
    Bladder0.966012380.05220935210.940241380.935Passed
    Bladder (MRI)0.9635180.01177413100.955095880.88Passed
    Bladder_CBCT0.9591730.04229406100.928919750.91Passed
    Bladder_HDR0.9311670.06094679100.887571320.56674243Passed
    Bladder_HDR (MRI)0.8968830.13171833100.802663930.79Passed
    Bone Marrow_Pelvic0.9954140.00407252100.99250090.805Passed
    Bone_Hyoid0.854114170.04163051120.826930150.77Passed
    Bone_Illium_L0.98880750.01103973120.981598740.76Passed
    Bone_Illium_R0.990588330.00575056120.986833320.76Passed
    Bone_Ischium_L0.9389850.01573502100.927729630.76Passed
    Bone_Ischium_R0.939230.01613541100.927688220.76Passed
    Bone_Mandible0.940247690.01266685130.932300940.922Passed
    Bone_Pelvic0.983830.00511637100.980170220.929Passed
    Bowel0.907432170.06592406230.876338460.74Passed
    Bowel_Bag0.939794780.03659061230.922536470.752Passed
    Bowel_Bag_Extend0.9715760.01582803200.96357020.752Passed
    Bowel_Bag_Full0.96799850.01354157200.96114920.752Passed
    Bowel_Bag_Superior0.936864550.09357214110.87304660.752Passed
    Bowel_Extend0.9416820.03040818200.926301590.74Passed
    Bowel_Full0.923513810.03444493210.906511480.74Passed
    Bowel_HDR0.8413680.05558462100.801607920.2008343Passed
    Bowel_HDR (MRI)0.558318180.24969816110.388019370.31Passed
    Bowel_Superior0.902142730.03756911110.876519890.74Passed
    BrachialPlex_L0.6916050.10786794100.614446280.39Passed
    BrachialPlex_R0.6938090.11005989100.615082370.39Passed
    Brain0.9922050.00251205160.990784440.988Passed
    Brainstem0.903346880.03859191160.881523150.695Passed
    Brainstem (MRI)0.9255260.02877815100.904940780.725Passed
    Breast_Implant_L0.9928840.00662727100.988143460.865Passed
    Breast_Implant_R0.9736630.03491225100.948690010.865Passed
    Breast_L0.9545140.02763163100.93474890.726Passed
    Breast_R0.939520910.04383671110.909623450.7345Passed
    Bronchus0.8395150.06515951100.792905930.76Passed
    CW2cm_L0.9989550.00118886100.99810460.72Passed
    CW2cm_R0.9993760.00101477100.998650130.72Passed
    Canal_Anal0.875960950.13633659210.8086640.803Passed
    Canal_Anal_HDR0.9428910.04773688100.908744460.56167132Passed
    Canal_Anal_HDR (MRI)0.6102950.35031087100.359715110.31Passed
    Carina101010.77Passed
    CaudaEquina0.8820980.06633305100.834649490.722Passed
    Cavity_Oral0.9131130.0386665100.885454580.8Passed
    Cerebellum0.9832190.01399611100.973207480.83Passed
    Chestwall_L0.959070910.00299448110.957028620.72Passed
    Chestwall_R0.959571820.00327572110.957337720.72Passed
    Clavicle_L0.980143750.01256694160.973037150.93Passed
    Clavicle_R0.9815650.01013648160.975832820.93Passed
    Cochlea_L0.7023110.10183115100.629470450.533Passed
    Cochlea_R0.6867580.14712802100.581516270.545Passed
    Colon_Sigmoid0.816253810.15924956210.737646810.704Passed
    Colon_Sigmoid_HDR0.8655050.12156688100.778547340.30928644Passed
    Colon_Sigmoid_HDR (MRI)0.7530360.15966944100.63882330.47Passed
    Cornea_L0.961831820.06990272110.914156860.489Passed
    Cornea_L (MRI)0.9137180.03513108100.888588480.489Passed
    Cornea_R0.969347270.05299966110.933200510.498Passed
    Cornea_R (MRI)0.9272230.02302511100.910752970.498Passed
    Duodenum0.8284330.18461132100.696379190.649Passed
    ESTRO_LN_Ax_IP_L0.9845520.0225043100.968454510.79Passed
    ESTRO_LN_Ax_IP_R0.9888010.01830441100.975707730.796Passed
    ESTRO_LN_Ax_L1_L0.9971220.00681545100.992246860.66Passed
    ESTRO_LN_Ax_L1_R0.9675030.01693458100.955389570.66Passed
    ESTRO_LN_Ax_L2+IP_Fill_L0.9929860.01314147100.983585820.73Passed
    ESTRO_LN_Ax_L2+IP_Fill_R0.9942060.01093726100.98638250.73Passed
    ESTRO_LN_Ax_L2_L0.9951920.01077199100.987486720.73Passed
    ESTRO_LN_Ax_L2_R0.9973520.00448639100.994142860.73Passed
    ESTRO_LN_Ax_L3_L0.9933820.00884358100.987056120.51Passed
    ESTRO_LN_Ax_L3_R0.9921490.01468854100.981642180.51Passed
    ESTRO_LN_IMN_L0.9805970.02745317100.960959550.39Passed
    ESTRO_LN_IMN_L_Expand0.9820790.05662552100.941574360.39Passed
    ESTRO_LN_IMN_R0.9744020.04214952100.944252140.39Passed
    ESTRO_LN_IMN_R_Expand0.9778520.06968747100.928004050.39Passed
    ESTRO_LN_Sclav_L0.975860.03136266100.953426060.7Passed
    ESTRO_LN_Sclav_R0.987350.02268151100.971125750.7Passed
    Esophagus0.837410830.02651585120.820096430.67Passed
    Eye_L0.935118240.03324782170.916877960.894Passed
    Eye_L (MRI)0.9503370.01463563100.939868030.847Passed
    Eye_R0.941917060.03257919170.924043610.902Passed
    Eye_R (MRI)0.9396660.03581356100.91404830.849Passed
    Femur_Head_L0.9612990.00888921100.954940490.93Passed
    Femur_Head_L (MRI)0.9381620.04781144100.903962140.77Passed
    Femur_Head_L_CBCT0.9779390.01378171100.968080850.88Passed
    Femur_Head_R0.9613810.01105991100.953469760.937Passed
    Femur_Head_R (MRI)0.9485860.02852155100.928184330.77Passed
    Femur_Head_R_CBCT0.9896670.01081208100.981933040.88Passed
    Gallbladder0.9464220.05882969100.90434070.809Passed
    Glnd_Lacrimal_L0.765745380.0785035130.716494970.489Passed
    Glnd_Lacrimal_R0.734740770.09508335130.675088720.498Passed
    Glnd_Submand_L0.8381830.08845188100.774912730.725Passed
    Glnd_Submand_R0.8826720.0245712100.865096050.595Passed
    Glnd_Thyroid0.8405750.03434333100.816008970.716Passed
    GreatVes0.9562810.01660489100.94440340.81Passed
    Heart0.954888330.02805647120.936567930.89Passed
    Heart+A_Pulm0.9956630.01079707100.987939770.89Passed
    Hippocampus_L0.8974740.14363431100.794731350.45Passed
    Hippocampus_L (MRI)0.8010920.07687695100.746101360.618Passed
    Hippocampus_R0.8419330.23470004100.674050370.45Passed
    Hippocampus_R (MRI)0.8042290.07348396100.751665390.618Passed
    Humerus_L0.9815920.03366976100.957507780.93Passed
    Humerus_R0.9838040.02794829100.963812390.93Passed
    InternalAuditoryCanal_L0.7196630.27119782100.525673250.41Passed
    InternalAuditoryCanal_R0.7783020.29907667100.56437030.41Passed
    Kidney_L0.972110.0055787100.968119510.83Passed
    Kidney_R0.9712350.00508737100.967595970.85Passed
    LN_Ax_L1_L0.933470.03827463100.906091880.66Passed
    LN_Ax_L1_R0.9573660.01855924100.944090440.66Passed
    LN_Ax_L2_L0.7978470.03448156100.773182090.73Passed
    LN_Ax_L2_R0.8366890.03793359100.809554830.73Passed
    LN_Ax_L3_L0.8414690.02407574100.824247450.51Passed
    LN_Ax_L3_R0.8332020.05413932100.794475760.51Passed
    LN_Ax_Sclav_L0.8548590.07708553100.799719170.66Passed
    LN_Ax_Sclav_R0.8393540.0636715100.793809320.66Passed
    LN_IMN_L0.6810720.05716488100.640181550.39Passed
    LN_IMN_L_Expand0.9741580.08171958100.91570340.39Passed
    LN_IMN_R0.7546240.0588019100.712562580.39Passed
    LN_IMN_R_Expand0.9692350.09728747100.899644570.39Passed
    LN_Inguinal_L0.9877520.01196273100.979194970.779Passed
    LN_Inguinal_R0.9758560.01828094100.962779510.779Passed
    LN_Neck_IA0.880388180.10469436110.808984670.41Passed
    LN_Neck_IA60.945973640.03537206110.921849230.896Passed
    LN_Neck_IB_L0.9185530.02691603100.899299770.896Passed
    LN_Neck_IB_R0.9162480.01954066100.902270420.896Passed
    LN_Neck_III_L0.9243770.02716647100.904944630.752Passed
    LN_Neck_III_R0.9038050.03651978100.877682140.775Passed
    LN_Neck_II_L0.9214250.02096226100.906430540.894Passed
    LN_Neck_II_R0.9199180.02031001100.90539010.894Passed
    LN_Neck_IV_L0.8370670.10669372100.760748210.655Passed
    LN_Neck_IV_R0.8134740.07643769100.758797570.655Passed
    LN_Neck_L0.868750.04264226120.840905320.779Passed
    LN_Neck_R0.868550.04499896120.839166430.779Passed
    LN_Neck_VI0.938220830.07273804120.890724120.722Passed
    LN_Neck_VIIAB_L0.7045620.14161814100.603261530.55Passed
    LN_Neck_VIIAB_R0.6840870.15673354100.571974370.55Passed
    LN_Neck_VIIA_L0.9736970.03639132100.947666030.54Passed
    LN_Neck_VIIA_R0.9630450.0518865100.925930220.54Passed
    LN_Neck_VIIB_L0.9794430.02540021100.961274050.69Passed
    LN_Neck_VIIB_R0.97270.02326651100.95605730.71Passed
    LN_Neck_V_L0.8996680.05719485100.858756110.785Passed
    LN_Neck_V_R0.8551860.05671539100.814617070.775Passed
    LN_Pelvics0.901693180.05091482220.8771390.779Passed
    LN_Pelvics_CBCT0.9747420.04202674100.944679970.58Passed
    LN_Sclav_L0.960930.05712461100.920068350.7Passed
    LN_Sclav_R0.9584980.02948648100.937406110.7Passed
    Larynx0.8987770.05827018100.857095920.77Passed
    Lens_L0.782924710.08119035170.738382410.616Passed
    Lens_R0.760474710.07902615170.717119730.449Passed
    Lips0.8246960.14948194100.717770490.68Passed
    Liver0.977733850.01147248130.97053640.92Passed
    Lobe_Temporal_L0.9447440.07569022100.890602240.83Passed
    Lobe_Temporal_R0.9484560.06837365100.899547840.83Passed
    Lung_L0.9831150.00654768100.97843140.96Passed
    Lung_R0.9836490.00652109100.978984410.96Passed
    Mesorectum0.8279650.05209883100.790698330.779Passed
    Musc_Constrict0.8690970.05737849100.828053760.61Passed
    Musc_PecMinor_L0.8692590.04744788100.835319190.79Passed
    Musc_PecMinor_R0.8635840.06177418100.819396490.796Passed
    Musc_Sclmast_L0.9461170.02773018100.92628140.803Passed
    Musc_Sclmast_R0.9452910.03302699100.921666560.803Passed
    OpticChiasm0.659298820.1679447170.567161740.41Passed
    OpticNrv_L0.825769410.06203798170.791734410.73Passed
    OpticNrv_R0.828942940.06130553170.795309770.72Passed
    Optics (MRI)0.7648460.05410538100.726144030.504Passed
    Pancreas0.8843430.09900704100.813522550.769Passed
    Parotid_L0.883520830.06794505120.839153860.778Passed
    Parotid_R0.882816670.05035732120.849934190.803Passed
    PelvisVessels0.9149980.02637213100.896133820.26Passed
    PenileBulb0.848508180.04605243110.817099560.705Passed
    PenileBulb (MRI)0.732310.27283179100.537151450.46Passed
    Pericardium0.9848280.0185493100.971559550.8688Passed
    Pericardium+A_Pulm0.9949730.01235765100.986133480.89Passed
    Pituitary0.750418670.15158537150.661885860.41Passed
    Prostate0.9340930.02193268100.918404390.88Passed
    Prostate (MRI)0.9151640.03096645100.893013480.8Passed
    ProstateBed0.746913330.1454049150.66199020.5Passed
    ProstateFiducials (beta)0.614220.24931989100.435879680.41Passed
    Prostate_CBCT0.9612690.04241179100.930931540.79Passed
    PubicSymphys0.9437430.02100908100.928715050.76Passed
    PubicSymphys (MRI)0.7795850.11210947100.699392290.54Passed
    Rectum0.886817620.08654191210.844099760.803Passed
    Rectum (MRI)0.9346190.02030278100.920096280.77Passed
    Rectum_CBCT0.9631030.02896341100.942385260.87Passed
    Rectum_HDR0.9185530.09535355100.850345920.56167132Passed
    Rectum_HDR (MRI)0.7810.15188698100.672354150.58Passed
    Retina_L0.907613640.1929304110.77603150.489Passed
    Retina_L (MRI)0.9532710.04079533100.924089810.489Passed
    Retina_R0.911916360.1905022110.781990310.498Passed
    Retina_R (MRI)0.928540.05520214100.889053510.498Passed
    Ribs_L0.944735450.00563264110.940893890.81Passed
    Ribs_R0.946216360.00495204110.942838980.81Passed
    | Structure | Mean DSC | Std Dev | N (scans) | DSC Lower 95% CI | Test DSC Threshold | Result |
    |---|---|---|---|---|---|---|
    | Sacrum | 0.97012438 | 0.01642714 | 16 | 0.96083483 | 0.82 | Passed |
    | Sacrum (MRI) | 0.966632 | 0.04786265 | 10 | 0.9323955 | 0.77 | Passed |
    | SeminalVes | 0.82148 | 0.16089309 | 10 | 0.70639202 | 0.5 | Passed |
    | SeminalVes (MRI) | 0.833995 | 0.05384245 | 10 | 0.79548111 | 0.39 | Passed |
    | SeminalVes_CBCT | 0.904653 | 0.05295257 | 10 | 0.86677565 | 0.621 | Passed |
    | SpaceOARVue (beta) | 0.866934 | 0.0421535 | 10 | 0.8367813 | 0.5 | Passed |
    | SpinalCanal | 0.8971765 | 0.06232767 | 20 | 0.86565125 | 0.722 | Passed |
    | SpinalCord | 0.87788679 | 0.06353613 | 28 | 0.8507265 | 0.722 | Passed |
    | Spleen | 0.98238429 | 0.00724712 | 14 | 0.97800307 | 0.958 | Passed |
    | Sternum | 0.968506 | 0.00927831 | 10 | 0.96186916 | 0.8 | Passed |
    | Stomach | 0.92353818 | 0.04262548 | 11 | 0.89446681 | 0.64 | Passed |
    | Trachea | 0.900195 | 0.04891354 | 10 | 0.86520679 | 0.77 | Passed |
    | Urethra_HDR | 0.68898 | 0.26984128 | 10 | 0.49596059 | 0.26 | Passed |
    | Urethra_HDR (MRI) | 0.558433 | 0.30664371 | 10 | 0.33908855 | 0.26 | Passed |
    | Uterus+Cervix | 0.923876 | 0.07561527 | 10 | 0.86978785 | 0.8525 | Passed |
    | VB_C1 | 0.871119 | 0.09654266 | 10 | 0.80206134 | 0.389 | Passed |
    | VB_C2 | 0.890465 | 0.08375338 | 10 | 0.83055561 | 0.389 | Passed |
    | VB_C3 | 0.882984 | 0.07768781 | 10 | 0.82741335 | 0.389 | Passed |
    | VB_C4 | 0.847607 | 0.13488571 | 10 | 0.75112228 | 0.389 | Passed |
    | VB_C5 | 0.735888 | 0.25082081 | 10 | 0.55647407 | 0.389 | Passed |
    | VB_C6 | 0.662313 | 0.36427326 | 10 | 0.40174571 | 0.389 | Passed |
    | VB_C7 | 0.686772 | 0.36426675 | 10 | 0.42620937 | 0.389 | Passed |
    | VB_L1 | 0.7397 | 0.36842487 | 12 | 0.49912477 | 0.389 | Passed |
    | VB_L2 | 0.85353818 | 0.3119903 | 11 | 0.64075498 | 0.389 | Passed |
    | VB_L3 | 0.89139273 | 0.29651247 | 11 | 0.68916569 | 0.389 | Passed |
    | VB_L4 | 0.88930818 | 0.29491517 | 11 | 0.68817053 | 0.389 | Passed |
    | VB_L5 | 0.971761 | 0.01978041 | 10 | 0.95761193 | 0.389 | Passed |
    | VB_T01 | 0.749499 | 0.20357809 | 10 | 0.60387813 | 0.389 | Passed |
    | VB_T02 | 0.900861 | 0.10665309 | 10 | 0.82457127 | 0.389 | Passed |
    | VB_T03 | 0.846845 | 0.23384828 | 10 | 0.67957164 | 0.389 | Passed |
    | VB_T04 | 0.871065 | 0.16033768 | 10 | 0.7563743 | 0.389 | Passed |
    | VB_T05 | 0.868184 | 0.10527773 | 10 | 0.79287808 | 0.389 | Passed |
    | VB_T06 | 0.856586 | 0.1735606 | 10 | 0.73243685 | 0.389 | Passed |
    | VB_T07 | 0.895207 | 0.09111821 | 10 | 0.83002949 | 0.389 | Passed |
    | VB_T08 | 0.90946273 | 0.09927246 | 11 | 0.84175706 | 0.389 | Passed |
    | VB_T09 | 0.895233 | 0.19379417 | 10 | 0.75661063 | 0.389 | Passed |
    | VB_T10 | 0.85180692 | 0.25673522 | 13 | 0.69073999 | 0.389 | Passed |
    | VB_T11 | 0.90543538 | 0.17253913 | 13 | 0.79719022 | 0.389 | Passed |
    | VB_T12 | 0.73466077 | 0.38432697 | 13 | 0.49354712 | 0.389 | Passed |
    | VBs | 0.984448 | 0.01042268 | 10 | 0.97699258 | 0.579 | Passed |
    | V_Venacava_l | 0.95366786 | 0.05427303 | 14 | 0.92085737 | 0.72 | Passed |
    | V_Venacava_S | 0.851219 | 0.0503676 | 10 | 0.81519069 | 0.8 | Passed |
    | Vagina | 0.897341 | 0.06081143 | 10 | 0.85384215 | 0.665 | Passed |
    | Ventricle_L | 0.951144 | 0.00689877 | 10 | 0.94620926 | 0.9 | Passed |
    | Ventricle_R | 0.980784 | 0.01685223 | 10 | 0.96872948 | 0.8 | Passed |
    | Wire_Breast_L (beta) | 0.750245 | 0.28619472 | 10 | 0.54552785 | 0.39 | Passed |
    | Wire_Breast_R (beta) | 0.896053 | 0.15011288 | 10 | 0.78867618 | 0.39 | Passed |

    All listed structures met their respective acceptance criteria by having their Limbus DSC lower 95% confidence edge exceed or meet the specified Test DSC Thresholds.
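The pass/fail logic above can be sketched numerically. The filing does not state the exact interval construction, so this is a minimal sketch assuming a one-sided normal (z) approximation of the mean DSC; a t-based interval would be slightly wider, and the function names are illustrative.

```python
from statistics import NormalDist

def dsc_lower_edge(mean_dsc: float, std_dsc: float, n: int, level: float = 0.95) -> float:
    """One-sided lower confidence edge of the mean DSC (normal approximation)."""
    z = NormalDist().inv_cdf(level)  # ~1.645 for a one-sided 95% bound
    return mean_dsc - z * std_dsc / n ** 0.5

def passes(mean_dsc: float, std_dsc: float, n: int, threshold: float) -> bool:
    """A structure passes if its lower confidence edge meets the test DSC threshold."""
    return dsc_lower_edge(mean_dsc, std_dsc, n) >= threshold

# Spleen row from the table: mean 0.98238429, std 0.00724712, n = 14, threshold 0.958
print(passes(0.98238429, 0.00724712, 14, 0.958))  # True
```

Note this approximation lands close to, but not exactly on, the tabulated lower bounds, which were presumably computed with the manufacturer's own (unstated) interval method.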

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Test Set Sample Size: For each structure, a set of at least 10 patient scans was used for initial performance testing. Some structures had larger test sets, as indicated in the table (e.g., Bladder with 21 scans, Bowel with 23 scans, SpinalCord with 28 scans).
    • Data Provenance: The test scans were randomly selected from a total pool of patient scans that contained the relevant structure. This pool was selected to reflect the general population of patients receiving radiation treatments. The data provenance details are further described in the "Training and Validation Datasets" section for the training data, implying a similar origin for the test data (multiple clinical sites and countries).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: The ground truth contours were from "multiple experts at multiple institutions." The exact number is not explicitly stated.
    • Qualifications of Experts: The ground truth contours were all reviewed by a "board certified radiation oncologist" to ensure consistency with established standards and guidelines for contouring and proper labeling.

    4. Adjudication Method for the Test Set

    The document does not explicitly describe a specific adjudication method like "2+1" or "3+1" for the ground truth contours in the test set. It states that the ground truth contours were from "multiple experts at multiple institutions" and reviewed by "a board certified radiation oncologist." This implies a form of consensus or expert review process, though the specific protocol for resolving discrepancies (if any) is not detailed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned as being performed for this submission. The performance data is focused on the standalone algorithm's accuracy (Dice Similarity Coefficient) against expert-generated ground truth.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone performance study was done. The "Automatic Contouring - Validation Test" as described is a benchtop performance test where the software's outputs are compared to expert-generated ground truth without human intervention in the contouring process of the device itself.
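The Dice Similarity Coefficient used as the standalone metric has a standard definition; a minimal sketch over binary masks represented as sets of foreground voxel indices (the toy masks below are illustrative, not taken from the submission):

```python
def dice(mask_a: set, mask_b: set) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) over sets of foreground voxel indices."""
    if not mask_a and not mask_b:
        return 1.0  # convention: two empty masks agree perfectly
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Toy example: algorithm contour vs. expert ground truth on a 2-D voxel grid
algo = {(1, 1), (1, 2), (2, 1), (2, 2)}
expert = {(1, 2), (2, 1), (2, 2), (3, 2)}
print(dice(algo, expert))  # 2*3 / (4+4) = 0.75
```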

    7. The Type of Ground Truth Used

    The ground truth used was expert consensus contours. These were human-generated contours reviewed by a board-certified radiation oncologist to ensure they conformed to clinical trial guidelines and established standards.

    8. The Sample Size for the Training Set

    The total number of unique scans included in training datasets exceeds 10,000 scans. The table provided in the document details the number of training and validation scans for each individual structure model, with training scan counts ranging from tens to over a thousand for each structure.

    9. How the Ground Truth for the Training Set Was Established

    The ground truth for the training set was established through:

    • Human-generated contours from a variety of anonymized and pseudo-anonymized datasets.
    • These datasets were collected from publicly available clinical trials and the company's clinical and research partners.
    • The training dataset ground truth contours were reviewed and edited by in-house clinicians and radiation oncologists to ensure consistency with established standards and guidelines for contouring (e.g., RTOG 1106, RTOG 0848, EMBRACE II, DAHANCA, NRG, ESTRO, ACROP, EPTN).
    • To minimize bias, training data included scans from multiple clinical sites, countries (United States, Canada, United Kingdom, France, Germany, Italy, Netherlands, Switzerland, Australia, New Zealand, Singapore), and different makes/models of imaging devices (GE, Siemens, Phillips, Toshiba, Elekta).
    • The scans and ground truth contours were from the general patient population receiving radiotherapy, with no restrictions based on age, ethnicity, race, gender, or disease states.

    K Number
    K232928
    Date Cleared
    2024-05-07

    (230 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    QKB

    AI/ML · SaMD · IVD (In Vitro Diagnostic) · Therapeutic · Diagnostic · PCCP Authorized · Third-party · Expedited Review
    Intended Use

    DeepContour is deep-learning-based medical imaging software that allows trained healthcare professionals to use DeepContour as a tool to automatically process CT images. In addition, DeepContour is suitable for the following:

    1. Creation of contours using deep-learning algorithms, with support for quantitative analysis, organ HU distribution statistics, transfer of contour files to a TPS, and creation of management archives for patients.
    2. Analysis of anatomical structures at different anatomical positions.
    3. Rigid and elastic registration based on CT.
    4. 3D reconstruction, editing, and other visual tools based on organ contours.
    Device Description

    DeepContour is a deep learning based medical imaging software that allows trained healthcare professionals to use DeepContour as a tool to automatically process CT images. DeepContour contouring workflow supports CT input data and produces RTSTRUCT outputs. The organ segmentation can also be combined into templates, which can be customized by different hospitals according to their needs. DeepContour provides an interactive contouring application to edit and review the contours automatically generated by DeepContour.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the DeepContour (V1.0) device, based on the provided FDA 510(k) Summary:

    Acceptance Criteria and Reported Device Performance

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state "acceptance criteria" as a set of predefined quantitative thresholds the device must meet. Instead, the study's aim is to demonstrate that DeepContour's performance is equivalent to or better than the predicate devices. The primary metric used for this comparison is the Dice coefficient, and the implicit acceptance criterion is that DeepContour's performance is not significantly worse than the predicates.

    The equivalence definition is stated as: "the lower bound of 95th percentile confidence interval of the subject device segmentation is greater than 0.1 Dice lower than the mean of predicate device segmentation."
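Numerically, this equivalence rule reduces to a simple non-inferiority check; a sketch below, using the Brain row from Table 5 as an illustration (the function name is hypothetical):

```python
def non_inferior(subject_ci_lower: float, predicate_mean: float, margin: float = 0.1) -> bool:
    """Equivalence per the stated rule: the subject device's lower 95% CI bound
    must exceed the predicate's mean Dice minus a 0.1 Dice margin."""
    return subject_ci_lower > predicate_mean - margin

# Brain (Table 5): DeepContour lower CI bound 0.97 vs. AI-Rad Companion mean 0.93
print(non_inferior(0.97, 0.93))  # True: 0.97 > 0.83
```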

    Below is a table summarizing the reported Dice coefficients for DeepContour and the predicate devices for a selection of structures. It also includes the summary Average Symmetric Surface Distance (ASSD) comparison.

    Table 1: Acceptance Criteria (Implicit) and Reported Device Performance

    | Metric | Implicit Acceptance Criterion | DeepContour Reported Performance (Mean ± Std (95% CI Lower Bound)) | Predicate: AI-Rad Companion Organs RT (Mean ± Std) | Predicate: Contour ProtégéAI (Mean ± Std) |
    |---|---|---|---|---|
    | Dice Coefficient | Lower 95% CI bound of DeepContour segmentation > (predicate mean − 0.1 Dice) | See "Clinical performance comparison" tables below for specific structures | See tables below | See tables below |
    | ASSD (median) | Median ASSD comparable to predicate devices | 0.95 (95% CI: [0.85, 1.13]) | 0.96 (95% CI: [0.84, 1.15]) | 0.95 (95% CI: [0.86, 1.17]) |
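ASSD (Average Symmetric Surface Distance) averages nearest-surface distances in both directions; a minimal sketch over 2-D surface point sets (real implementations operate on 3-D boundary voxels or meshes, and the point sets below are illustrative):

```python
from math import dist  # Python 3.8+: Euclidean distance between two points

def assd(surface_a, surface_b):
    """Average Symmetric Surface Distance: mean of nearest-neighbor distances
    from A to B and from B to A, pooled over all surface points."""
    a_to_b = [min(dist(p, q) for q in surface_b) for p in surface_a]
    b_to_a = [min(dist(q, p) for p in surface_a) for q in surface_b]
    return sum(a_to_b + b_to_a) / (len(surface_a) + len(surface_b))

# Two parallel "surfaces" 1 mm apart: every nearest-neighbor distance is 1.0
a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
b = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
print(assd(a, b))  # 1.0
```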

    Table 5: Clinical performance comparison (Peking Union Medical College Hospital) - Selected Structures

    | Structure | DeepContour | AI-Rad Companion Organs RT (K221305) | Contour ProtégéAI (K223774) |
    |---|---|---|---|
    | Brain | 0.98±0.01 (0.97) | 0.93±0.11 | 0.98±0.01 |
    | BrainStem | 0.91±0.03 (0.89) | 0.90±0.02 | 0.82±0.09 |
    | Eye_L | 0.89±0.02 (0.88) | 0.81±0.06 | 0.87±0.06 |
    | Lung_L | 0.98±0.05 (0.96) | 0.92±0.16 | 0.96±0.02 |
    | Heart | 0.93±0.16 (0.90) | 0.91±0.06 | 0.90±0.07 |
    | Liver | 0.96±0.07 (0.95) | 0.86±0.17 | 0.93±0.07 |
    | Kidney_L | 0.92±0.03 (0.91) | 0.82±0.13 | 0.92±0.05 |
    | Pancreas | 0.86±0.01 (0.86) | 0.87±0.03 | 0.45±0.22 |
    | Bladder | 0.95±0.15 (0.93) | 0.87±0.15 | 0.52±0.19 |
    | Prostate | 0.87±0.02 (0.85) | 0.74±0.12 | 0.85±0.06 |
    | SpinalCord | 0.93±0.01 (0.92) | 0.66±0.14 | 0.63±0.16 |

    Table 6: Clinical performance comparison (LCTSC American public datasets) - Selected Structures

    | Structure | DeepContour | AI-Rad Companion Organs RT (K221305) | Contour ProtégéAI (K223774) |
    |---|---|---|---|
    | SpinalCord | 0.92±0.02 (0.91) | 0.64±0.13 | 0.62±0.21 |
    | Lung_L | 0.97±0.15 (0.96) | 0.90±0.13 | 0.95±0.05 |
    | Heart | 0.92±0.11 (0.90) | 0.91±0.04 | 0.90±0.04 |
    | Esophagus | 0.89±0.13 (0.86) | 0.75±0.13 | 0.68±0.19 |

    Table 7: Clinical performance comparison (Pancreas-CT American public datasets) - Selected Structures

    | Structure | DeepContour | AI-Rad Companion Organs RT (K221305) | Contour ProtégéAI (K223774) |
    |---|---|---|---|
    | Spleen | 0.90±0.05 (0.88) | 0.91±0.12 | 0.89±0.08 |
    | Pancreas | 0.85±0.03 (0.83) | 0.84±0.02 | 0.43±0.25 |
    | Kidney_L | 0.93±0.02 (0.91) | 0.84±0.03 | 0.92±0.17 |
    | Liver | 0.97±0.03 (0.97) | 0.85±0.13 | 0.92±0.06 |
    | Stomach | 0.85±0.02 (0.84) | 0.80±0.05 | 0.81±0.17 |


    2. Sample sized used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size: 203 CT images.
      • 100 clinical datasets
      • 103 American public datasets (60 from LCTSC, 43 from Pancreas-CT)
    • Data Provenance:
      • 100 clinical datasets: Retrospectively collected from Peking Union Medical College Hospital (China).
      • 103 American public datasets: Publicly available datasets originally from American sources.
        • 2017 Lung CT Segmentation Challenge (LCTSC): 60 thoracic CT scan patients.
        • Pancreas-CT (PCT): 43 abdominal contrast-enhanced CT scan patients.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • For the 100 clinical datasets (China): Two radiation oncologists with more than 10 years of clinical practice established the ground truth annotations. Their detailed CVs are in Appendix 2 (not provided in the input, but referenced).
    • For the 103 American public datasets: Annotated by American doctors. (Specific qualifications not detailed in the provided text).

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • For the 100 clinical datasets (China): The ground truth was established by two radiation oncologists. A third qualified internal staff member was available to adjudicate if needed. This implies a 2+1 adjudication method if there was disagreement.
    • For the 103 American public datasets: No explicit adjudication method is mentioned, only that they were "annotated by American doctors."

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    The provided text does not describe a Multi-Reader Multi-Case (MRMC) comparative effectiveness study involving human readers with and without AI assistance to measure improvement in human performance. The study focuses on the standalone performance of the AI algorithm (DeepContour) compared to predicate devices.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Yes, a standalone performance study was done. The entire "Performance comparison" section (Tables 5, 6, 7, and 8) details the Dice coefficients and ASSD values for the DeepContour algorithm, directly comparing its segmentation performance against the ground truth and the predicate devices. There is no human reader involved in generating the DeepContour results reported in these tables.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • For the 100 clinical datasets (China): Expert consensus (two radiation oncologists applying RTOG and clinical guidelines using manual annotation, with a third available for adjudication).
    • For the 103 American public datasets: Expert annotation by American doctors. (Implied expert consensus or single expert annotation from the original dataset creation process, as described by the original publications).

    8. The sample size for the training set

    • # of Datasets: 800 CT images.
      • 200 for head and neck region
      • 200 for chest region
      • 200 for abdomen region
      • 200 for pelvic region
      • (Out of these, 160 cases per region were used for training, and 40 cases per region for validation.)
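The 160/40 per-region split above is a standard 80/20 train/validation partition; a sketch of how such a split is typically drawn (the case IDs and seed are illustrative, and this does not describe the manufacturer's actual procedure):

```python
import random

def train_val_split(case_ids, val_fraction=0.2, seed=42):
    """Shuffle once with a fixed seed, then carve off the validation slice."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)
    n_val = round(len(ids) * val_fraction)
    return ids[n_val:], ids[:n_val]  # (train, validation)

region_cases = [f"chest_{i:03d}" for i in range(200)]  # 200 chest-region cases
train, val = train_val_split(region_cases)
print(len(train), len(val))  # 160 40
```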

    9. How the ground truth for the training set was established

    The initial segmentations were reviewed and corrected by two radiation oncologists for model training, with a third qualified internal staff member available to adjudicate if needed. This indicates an expert review and correction process, likely similar to the 2+1 adjudication method used for the test set ground truth.


    K Number
    K232899
    Date Cleared
    2024-04-03

    (198 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    QKB

    AI/ML · SaMD · IVD (In Vitro Diagnostic) · Therapeutic · Diagnostic · PCCP Authorized · Third-party · Expedited Review
    Intended Use

    AI-Rad Companion Organs RT is post-processing software intended to automatically contour predefined structures on DICOM CT and MR images using deep-learning-based algorithms.

    Contours that are generated by AI-Rad Companion Organs RT may be used as input for clinical workflows including external beam radiation therapy treatment planning. AI-Rad Companion Organs RT must be used in conjunction with appropriate software such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept contours generated by AI-Rad Companion Organs RT.

    The output of AI-Rad Companion Organs RT are intended to be used by trained medical professionals.

    The software is not intended to automatically detect or contour lesions.

    Device Description

    AI-Rad Companion Organs RT provides automatic segmentation of pre-defined structures such as Organs-at-risk (OAR) from CT or MR medical series, prior to dosimetry planning in radiation therapy. AI-Rad Companion Organs RT is not intended to be used as a standalone diagnostic device and is not a clinical decision-making software.

    CT or MR series of images serve as input for AI-Rad Companion Organs RT and are acquired as part of a typical scanner acquisition. Once processed by the AI algorithms, generated contours in DICOM-RTSTRUCT format are reviewed in a confirmation window, allowing clinical user to confirm or reject the contours before sending to the target system. Optionally, the user may select to directly transfer the contours to a configurable DICOM node (e.g., the TPS, which is the standard location for the planning of radiation therapy).

    The output of AI-Rad Companion Organs RT must be reviewed and, where necessary, edited with appropriate software before accepting generated contours as input to treatment planning steps. The output of AI-Rad Companion Organs RT is intended to be used by qualified medical professionals. The qualified medical professional can perform a complementary manual editing of the contours or add any new contours in the TPS (or any other interactive contouring application supporting DICOM-RT objects) as part of the routine clinical workflow.

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) summary for AI-Rad Companion Organs RT:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria and reported performance are detailed for both MR and CT contouring algorithms.

    MR Contouring Algorithm Performance

    • Validation Testing Subject: MR Contouring Organs
    • Acceptance Criteria: The average segmentation accuracy (Dice value) of all subject device organs should be equivalent to or better than the overall segmentation accuracy of the predicate device, and the overall fail rate for each organ/anatomical structure must be smaller than 15%.
    • Reported Device Performance (Average): Dice 85.75% (95% CI: [82.85, 87.58]); ASSD 1.25 mm (95% CI: [0.95, 2.02]); fail rate 2.75%.
    • Comparison against the reference device MRCAT Pelvis (K182888):
      • AI-Rad Companion Organs RT VA50 – all organs: 86% (83–88)
      • AI-Rad Companion Organs RT VA50 – common organs: 82% (78–84)
      • MRCAT Pelvis (K182888) – all organs: 77% (75–79)

    CT Contouring Algorithm Performance

    • Organs in Predicate Device
      • Acceptance Criteria: All the organs segmented in the predicate device are also segmented in the subject device, and the average (AVG) Dice score difference between the subject and predicate device is smaller than 3%.
      • Reported Performance: The document states the subject device was "equivalent or had better performance than the predicate device," implicitly meeting this criterion, but does not give a specific numerical difference.
    • New Organs for Subject Device
      • Acceptance Criteria: A baseline value is defined by subtracting a 5% error margin from the reference value in the case of Dice (or 0.1 mm in the case of ASSD); the subject device must exceed this baseline in the selected reference metric.
      • Reported Performance (Regional Average Dice): Head & Neck 76.5%; Head & Neck lymph nodes 69.2%; Thorax 82.1%; Abdomen 88.3%; Pelvis 84.0%.
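The new-organ baseline rule can be sketched numerically. This interprets the stated criteria under an assumption: since lower ASSD is better, the 0.1 mm margin is taken to loosen the bound in the opposite direction. The reference values below are hypothetical, used only to exercise the check.

```python
def passes_dice_baseline(subject_dice, reference_dice, margin=0.05):
    """New-organ Dice criterion: subject must exceed reference minus a 5% margin."""
    return subject_dice > reference_dice - margin

def passes_assd_baseline(subject_assd_mm, reference_assd_mm, margin_mm=0.1):
    """New-organ ASSD criterion (lower is better, assumed): subject must
    beat the reference value plus a 0.1 mm margin."""
    return subject_assd_mm < reference_assd_mm + margin_mm

# Thorax regional Dice 82.1% from the document vs. a hypothetical 0.84 reference
print(passes_dice_baseline(0.821, 0.84))  # True: 0.821 > 0.79
print(passes_assd_baseline(1.25, 1.20))   # True: 1.25 < 1.30
```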

    2. Sample Sizes Used for the Test Set and Data Provenance

    • MR Contouring Algorithm Test Set:
      • Sample Size: N = 66
      • Data Provenance: Retrospective study, data from multiple clinical sites across North America & Europe. The document further breaks this down for different sequences:
        • T1 Dixon W: 30 datasets (USA: 15, EU: 15)
        • T2 W TSE: 36 datasets (USA: 25, EU: 11)
        • Manufacturer: All Siemens Healthineers scanners.
    • CT Contouring Algorithm Test Set:
      • Sample Size: N = 414
      • Data Provenance: Retrospective study, data from multiple clinical sites across North American, South American, Asia, Australia, and Europe. This dataset is distributed across three cohorts:
        • Cohort A: 73 datasets (Germany: 14, Brazil: 59) - Siemens scanners only
        • Cohort B: 40 datasets (Canada: 40) - GE: 18, Philips: 22 scanners
        • Cohort C: 301 datasets (NA: 165, EU: 44, Asia: 33, SA: 19, Australia: 28, Unknown: 12) - Siemens: 53, GE: 59, Philips: 119, Varian: 44, Others: 26 scanners

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • The ground truth annotations were "drawn manually by a team of experienced annotators mentored by radiologists or radiation oncologists."
    • "Additionally, a quality assessment including review and correction of each annotation was done by a board-certified radiation oncologist using validated medical image annotation tools."
    • The exact number of individual annotators or experts is not specified beyond "a team" and "a board-certified radiation oncologist." Their specific experience level (e.g., "10 years of experience") is not given beyond "experienced" and "board-certified."

    4. Adjudication Method for the Test Set

    • The document implies a consensus/adjudication process: "a quality assessment including review and correction of each annotation was done by a board-certified radiation oncologist." This suggests that initial annotations by the "experienced annotators" were reviewed and potentially corrected by a higher-level expert. The specific number of reviewers for each case (e.g., 2+1, 3+1) is not explicitly stated, but it was at least a "team" providing initial annotations followed by a "board-certified radiation oncologist" for quality assessment/correction.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    • No, the document does not describe a Multi-Reader Multi-Case (MRMC) comparative effectiveness study evaluating how much human readers improve with AI vs. without AI assistance. The validation studies focused on the standalone performance of the algorithm against expert-defined ground truth.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Yes, the performance validation described in section 10 ("Performance Software Validation") is a standalone (algorithm only) performance study. The metrics (Dice, ASSD, Fail Rate) compare the algorithm's output directly to the established ground truth. The device produces contours that must be reviewed and edited by trained medical professionals, but the validation tests the AI's direct output.

    7. The Type of Ground Truth Used

    • The ground truth used was expert consensus/manual annotation. It was established by "manual annotation" by "experienced annotators mentored by radiologists or radiation oncologists" and subsequently reviewed and corrected by a "board-certified radiation oncologist." Annotation protocols followed NRG/RTOG guidelines.

    8. The Sample Size for the Training Set

    • MR Contouring Algorithm Training Set:
      • T1 VIBE/Dixon W: 219 datasets
      • T2 W TSE: 225 datasets
      • Prostate (T2W): 960 datasets
    • CT Contouring Algorithm Training Set: The training dataset sizes vary per organ group:
      • Cochlea: 215
      • Thyroid: 293
      • Constrictor Muscles: 335
      • Chest Wall: 48
      • LN Supraclavicular, Axilla Levels, Internal Mammaries: 228
      • Duodenum, Bowels, Sigmoid: 332
      • Stomach: 371
      • Pancreas: 369
      • Pulmonary Artery, Vena Cava, Trachea, Spinal Canal, Proximal Bronchus: 113
      • Ventricles & Atriums: 706
      • Descending Coronary Artery: 252
      • Penile Bulb: 854
      • Uterus: 381

    9. How the Ground Truth for the Training Set Was Established

    • For both training and validation data, the ground truth annotations were established using the "Standard Annotation Process." This involved:
      • Annotation protocols defined following NRG/RTOG guidelines.
      • Manual annotations drawn by a team of experienced annotators mentored by radiologists or radiation oncologists using an internal annotation tool.
      • A quality assessment including review and correction of each annotation by a board-certified radiation oncologist using validated medical image annotation tools.
    • The document explicitly states that the "training data used for the training of the algorithm is independent of the data used to test the algorithm."
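Train/test independence of this kind is typically enforced by splitting at the patient level rather than the scan level, so no patient contributes scans to both sets; a sketch under that assumption (scan records and field names are illustrative):

```python
from collections import defaultdict

def split_by_patient(scans, test_patient_ids):
    """Assign every scan of a given patient to exactly one side of the split."""
    by_patient = defaultdict(list)
    for scan in scans:
        by_patient[scan["patient_id"]].append(scan)
    train = [s for pid, grp in by_patient.items() if pid not in test_patient_ids for s in grp]
    test = [s for pid, grp in by_patient.items() if pid in test_patient_ids for s in grp]
    return train, test

scans = [{"patient_id": "P1", "scan": "a"}, {"patient_id": "P1", "scan": "b"},
         {"patient_id": "P2", "scan": "c"}]
train, test = split_by_patient(scans, {"P2"})
# No patient appears on both sides of the split
assert {s["patient_id"] for s in train}.isdisjoint({s["patient_id"] for s in test})
print(len(train), len(test))  # 2 1
```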
