510(k) Data Aggregation

    K Number
    K202928
    Device Name
    DV.Target
    Manufacturer
    Date Cleared
    2021-04-02 (185 days)
    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Intended Use

    DV.Target is a software application that enables the routing of DICOM-compliant data (CT images) to automatic image processing workflows, using machine learning-based algorithms to automatically delineate organs-at-risk (OARs). Contours generated by DV.Target may be used as an input to clinical workflows for treatment planning in radiation therapy.

    DV.Target is intended to be used by trained medical professionals including radiologists, radiation oncologists, dosimetrists, and physicists.

    DV.Target does not provide a user interface for data visualization. Uploaded image data, auto-contouring results, and other functionalities are managed via an administration interface. Thus, it is required that DV.Target be used in conjunction with appropriate software, such as a treatment planning system (TPS), to review, edit, and approve all contours generated by DV.Target.

    DV.Target is only intended for normal organ contouring, not for tumor or clinical target volume contouring.

    Device Description

    The proposed device, DV.Target, is a standalone software application designed to be used by trained medical professionals to automatically delineate organs-at-risk (OARs) on CT images. This OAR delineation function, often referred to as auto-contouring, is intended to facilitate radiation therapy workflows. Supported DICOM modalities include CT and RTSTRUCT.

    DV.Target can automatically delineate major OARs in three anatomical sites: Head & Neck, Thorax, and Abdomen & Pelvis. It receives CT images in DICOM format as input and automatically generates the contours of OARs, which are stored in DICOM format as RTSTRUCT objects.
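    To make that input/output contract concrete, here is a minimal sketch, not taken from the submission, of inspecting the kind of RTSTRUCT output described above. It assumes the pydicom library and a hypothetical file name:

    ```python
    import pydicom

    # Load an auto-contouring result; "output_rtstruct.dcm" is a hypothetical path.
    ds = pydicom.dcmread("output_rtstruct.dcm")

    # RTSTRUCT objects carry a structure set; each item names one contoured OAR.
    assert ds.Modality == "RTSTRUCT"
    for roi in ds.StructureSetROISequence:
        print(roi.ROINumber, roi.ROIName)  # e.g. 1 Brainstem, 2 SpinalCord
    ```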

    The deployment environment of the proposed device is recommended to be a local network with an existing hospital-grade IT system in place. DV.Target should be installed on a specialized server that supports deep learning processing. After installation, users can log in to the DV.Target administration interface via a browser on their local computers. All activities, including auto-contouring, are initiated by users through the administration interface.

    In addition to auto-contouring, DV.Target also has the following auxiliary functions:

    • User interface for receiving, updating and transmitting medical images in DICOM format;
    • User management;
    • Processed image management and output (RTSTRUCT) file management.

    Once data is routed to DV.Target auto-contouring workflows, no user interaction is required or provided. The image data, auto-contouring results, and other functionalities can be managed by DV.Target users via an administration user interface. Third-party image visualization and editing software, such as a treatment planning system (TPS), must be used to facilitate the review and editing of contours generated by DV.Target.
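    The submission does not describe how this routing is implemented. As one illustrative sketch, a DICOM storage listener of the kind such a deployment typically relies on could be built with the pynetdicom library; the AE title, port, and "incoming" staging directory below are assumptions, not details from the submission:

    ```python
    import os
    from pynetdicom import AE, evt, AllStoragePresentationContexts

    def handle_store(event):
        """Persist an incoming DICOM object so a downstream auto-contouring
        workflow can pick it up; no user interaction is involved."""
        ds = event.dataset
        ds.file_meta = event.file_meta
        ds.save_as(os.path.join("incoming", f"{ds.SOPInstanceUID}.dcm"),
                   write_like_original=False)
        return 0x0000  # DICOM "Success" status

    os.makedirs("incoming", exist_ok=True)   # hypothetical staging directory
    ae = AE(ae_title="AUTOCONTOUR")          # hypothetical AE title
    ae.supported_contexts = AllStoragePresentationContexts
    ae.start_server(("0.0.0.0", 11112), block=True,
                    evt_handlers=[(evt.EVT_C_STORE, handle_store)])
    ```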

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the DV.Target device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criterion for DV.Target is non-inferiority to the predicate and reference devices, as measured by the Dice-Sørensen coefficient (DICE score) for auto-contouring accuracy. (A short sketch of the DICE computation follows the table.)

    • Criterion: Non-inferiority to the predicate device (Mirada) for the 19 overlapping OARs (measured by DICE score).
      Performance: Achieved: "DV.Target is non-inferior to the predicate device Mirada on all 19 overlapping OARs." (Supported by Comparison Studies 1 & 2)
    • Criterion: Non-inferiority to the reference device (MIM) for the 30 non-overlapping OARs (measured by DICE score).
      Performance: Achieved: "DV.Target is non-inferior to the reference device MIM on the 30 non-overlapping OARs." (Supported by Comparison Studies 3a & 3b)
    • Criterion: Performance on the non-overlapping OARs similar to that on the overlapping OARs (measured by DICE score).
      Performance: Achieved: "The performance of DV.Target on the non-overlapping OARs is similar to its performance on the overlapping OARs." (Supported by Comparison Study 3b)
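    For reference, the DICE score compares a predicted binary mask with a ground-truth mask. This short sketch, not taken from the submission, shows the standard computation with NumPy:

    ```python
    import numpy as np

    def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
        """Dice-Sørensen coefficient: 2|A∩B| / (|A| + |B|); 1.0 = perfect overlap."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        denom = pred.sum() + truth.sum()
        if denom == 0:  # both masks empty: treat as perfect agreement
            return 1.0
        return 2.0 * np.logical_and(pred, truth).sum() / denom
    ```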

    2. Sample Sizes Used for the Test Set and Data Provenance

    The study utilized two independent datasets for testing:

    • Public Validation Dataset:

      • Data Provenance: "a large medical images archive --- TCIA" (The Cancer Imaging Archive). 64% of this data is from the US.
      • Approximate Sample Size (implied): This dataset was used for Comparison Study 1 and Comparison Study 3. While a specific number of cases isn't given, it's described as a "public validation dataset" used for evaluating 19 overlapping OARs and 30 non-overlapping OARs, implying a substantial dataset for statistical analysis across multiple organs.
      • Retrospective/Prospective: Retrospective (implied, as it's from an archive).
    • In-house Clinical Dataset:

      • Data Provenance: "retrospectively from the City of Hope (our primary validation site)."
      • Approximate Sample Size (implied): This dataset was used for Comparison Study 2 for evaluating overlapping OARs. Similar to the public dataset, a specific number of cases isn't given, but it's used for statistical evaluation.
      • Retrospective/Prospective: Retrospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Public Validation Dataset:

      • Number of Experts: Three.
      • Qualifications: "three board-certified physicians."
    • In-house Clinical Dataset:

      • Number of Experts: Not specified, but the ground truth was "based on actual clinical contouring results," implying it was established by clinical personnel.
      • Qualifications: Not specified, but would align with standard clinical practice for contouring.

    4. Adjudication Method for the Test Set

    • Public Validation Dataset: "The ground truth OARs contours on the public validation data were generated from the consensus of three board-certified physicians." This indicates an expert consensus method, likely implying that all three experts agreed on the contours; one common way to form such a consensus is sketched after this list.

    • In-house Clinical Dataset: "The ground truth contours on the in-house clinical data (collected retrospectively) were based on actual clinical contouring results." This implies adjudication through established clinical practice, but no specific multi-expert adjudication method (like 2+1 or 3+1) is mentioned.
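    The text does not say how the three physicians' contours were combined. Majority voting is one common way to form such a consensus (STAPLE is another); the sketch below, which is an illustration rather than the method used in the submission, applies it to binary voxel masks:

    ```python
    import numpy as np

    def majority_consensus(masks: list[np.ndarray]) -> np.ndarray:
        """Mark a voxel as inside the OAR if more than half of the readers
        contoured it; with three readers this means at least two of three."""
        stack = np.stack([m.astype(bool) for m in masks])
        return stack.sum(axis=0) > len(masks) / 2
    ```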

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done

    No, an MRMC comparative effectiveness study was not done. The studies compared the algorithm's performance (DV.Target) against other algorithms (the predicate and reference devices) and against ground truth established by human experts. There is no mention of human readers working with AI assistance being compared to human readers working without it to measure reader improvement.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study Was Done

    Yes, a standalone study was done. The entire set of "Comparison Studies" (Studies 1, 2, and 3) involved evaluating the "auto-contouring accuracy" of the DV.Target device. The text explicitly states, "Once data is routed to DV.Target auto-contouring workflows, no user interaction is required, nor provided." This confirms that the reported performance metrics (DICE scores) are solely based on the algorithm's output without human intervention.
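    The submission reports only that non-inferiority was achieved, not the statistical procedure used. One common formulation for paired per-case DICE scores is a one-sided test against a non-inferiority margin, sketched here; the margin of 0.05 is an assumed placeholder, not a value from the submission:

    ```python
    import numpy as np
    from scipy import stats

    def noninferiority_pvalue(dice_device, dice_comparator, margin=0.05):
        """One-sided paired t-test of H0: the device is worse than the comparator
        by more than `margin` DICE; a small p-value rejects inferiority."""
        diff = np.asarray(dice_device) - np.asarray(dice_comparator)
        return stats.ttest_1samp(diff + margin, 0.0, alternative="greater").pvalue
    ```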

    7. The Type of Ground Truth Used

    • Public Validation Dataset: Expert consensus (from three board-certified physicians).
    • In-house Clinical Dataset: Actual clinical contouring results. While derived from clinical practice, this can be considered a form of "expert" ground truth, as it represents the accepted clinical standard for those cases.

    8. The Sample Size for the Training Set

    The sample size for the training set is not provided in the given text. The document only mentions that the "validation data used in these studies... were invisible in model training."

    9. How the Ground Truth for the Training Set Was Established

    The method for establishing ground truth for the training set is not specified in the provided text.
