
510(k) Data Aggregation

    K Number
    K241490
    Manufacturer
    Date Cleared
    2024-10-18

    (147 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Device Name:

    Contour+ (MVision AI Segmentation)

    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, is PCCP Authorized, Third-party, Expedited review
    Intended Use

    Contour+ (MVision AI Segmentation) is a software system for image analysis algorithms to be used in radiation therapy treatment planning workflows. The system includes processing tools for automatic contouring of CT and MR images using machine learning based algorithms. The produced segmentation templates for regions of interest must be transferred to appropriate image visualization systems as an initial template for a medical professional to visualize, review, modify and approve prior to further use in clinical workflows.

    The system creates initial contours of pre-defined structures of common anatomical sites, i.e., Head and Neck, Brain, Breast, Lung and Abdomen, Male Pelvis, and Female Pelvis.

    Contour+ (MVision AI Segmentation) is not intended to detect lesions or tumors. The device is not intended for use with real-time adaptive planning workflows.

    Device Description

    Contour+ (MVision AI Segmentation) is a software-only medical device (software system) that can be used to accelerate region of interest (ROI) delineation in radiotherapy treatment planning by automatic contouring of predefined ROIs and the creation of segmentation templates on CT and MR images.

    The Contour+ (MVision AI Segmentation) software system is integrated with a customer IT network and configured to receive DICOM CT and MR images, e.g., from a CT or MRI scanner or a treatment planning system (TPS). Automatic contouring of predefined ROIs is performed by pre-trained, locked, and static models that are based on machine learning using deep artificial neural networks. The models have been trained on several anatomical sites, including the brain, head and neck, bones, breast, lung and abdomen, male pelvis, and female pelvis using hundreds of scans from a diverse patient population. The user does not have to provide any contouring atlases. The resulting segmentation structure set is connected to the original DICOM images and can be transferred to an image visualization system (e.g., a TPS) as an initial template for a medical professional to visualize, modify and approve prior to further use in clinical workflows.

    AI/ML Overview

    The provided text does not include a table of acceptance criteria and the reported device performance, nor does it specify the sample sizes used for the test set, the number of experts for ground truth, or details on comparative effectiveness studies (MRMC).

    However, based on the available information, here is a description of the acceptance criteria and study details:

    Acceptance Criteria and Study for Contour+ (MVision AI Segmentation)

    The study evaluated the performance of automatic segmentation models by comparing them to ground truth segmentations using Dice Score (DSC) and Surface-Dice Score (S-DSC@2mm) as metrics. The acceptance criteria were based on a "set level of minimum agreement against ground truth segmentations determined through clinically relevant similarity metrics DSC and S-DSC@2mm." While specific numerical thresholds for these metrics are not provided, the submission states that the device fulfills "the same acceptance criteria" as the predicate device.

    It's important to note that the provided document is an FDA 510(k) clearance letter and not the full study report. As such, it summarizes the findings and affirms the device's substantial equivalence without detailing every specific test result or acceptance threshold.
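    For concreteness, the two similarity metrics named above can be computed from binary masks roughly as follows. This is a generic NumPy sketch, not MVision's implementation: the 6-neighbour boundary test and brute-force pairwise distances are simplifications (production code would typically use a distance transform), and the voxel `spacing` argument is assumed to be in millimetres.

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient (DSC) between two boolean 3-D masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def _boundary(mask):
    """Voxels of `mask` that have at least one False 6-neighbour."""
    p = np.pad(mask, 1)
    interior = (p[:-2, 1:-1, 1:-1] & p[2:, 1:-1, 1:-1] &
                p[1:-1, :-2, 1:-1] & p[1:-1, 2:, 1:-1] &
                p[1:-1, 1:-1, :-2] & p[1:-1, 1:-1, 2:])
    return mask & ~interior

def surface_dice(a, b, tolerance_mm=2.0, spacing=(1.0, 1.0, 1.0)):
    """Surface Dice (S-DSC): fraction of the two masks' boundary voxels
    that lie within `tolerance_mm` of the other mask's boundary."""
    a, b = a.astype(bool), b.astype(bool)
    sa = np.argwhere(_boundary(a)) * np.asarray(spacing)
    sb = np.argwhere(_boundary(b)) * np.asarray(spacing)
    if len(sa) == 0 or len(sb) == 0:
        return 1.0 if len(sa) == len(sb) else 0.0
    d = np.linalg.norm(sa[:, None, :] - sb[None, :, :], axis=-1)
    close = (d.min(axis=1) <= tolerance_mm).sum() + (d.min(axis=0) <= tolerance_mm).sum()
    return close / (len(sa) + len(sb))
```

    The distinction matters for contouring: DSC is dominated by a structure's volume, while S-DSC@2mm measures how much of the contour an editor would actually have to move, which is why both appear in the acceptance criteria.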


    1. A table of acceptance criteria and the reported device performance

    Metric: Dice Score (DSC)
    Acceptance Criteria: Based on a "set level of minimum agreement against ground truth segmentations" (specific thresholds not provided).
    Reported Device Performance: "Performance verification and validation results for various subsets of the golden dataset show the generalizability and robustness of the device..."

    Metric: Surface-Dice Score (S-DSC@2mm)
    Acceptance Criteria: Based on a "set level of minimum agreement against ground truth segmentations" (specific thresholds not provided).
    Reported Device Performance: "...Contour+ (MVision AI Segmentation) fulfills the same acceptance criteria, provides the intended benefits, and it is as safe and as effective as the predicate software version."

    2. Sample sizes used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Sample Size for Test Set: The exact sample size for the test (golden) dataset is not specified, but it's referred to as "various subsets of the golden dataset" and chosen to "achieve high granularity in performance evaluation tests."
    • Data Provenance: The datasets originate from "multiple EU and US clinical sites (with over 50% of data coming from US sites)." It is described as containing "hundreds of scans from a diverse patient population," ensuring representation of the "US population and medical practice." The text does not explicitly state if the data was retrospective or prospective, but the description of "hundreds of scans" from multiple sites suggests it is likely retrospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    The number of experts used to establish the ground truth for the test set is not specified in the provided text. The qualifications are vaguely mentioned as "radiotherapy experts" who performed "Performance validation of machine learning-based algorithms for automatic segmentation." No specific years of experience or board certifications are detailed.


    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The adjudication method for establishing ground truth on the test set is not specified in the provided text. The text only states that the auto-segmentations were compared to "ground truth segmentations."


    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    A multi-reader multi-case (MRMC) comparative effectiveness study focusing on the improvement of human readers with AI assistance versus without AI assistance is not explicitly described in the provided text.

    The text states: "Performance validation of machine learning-based algorithms for automatic segmentation was also carried out by radiotherapy experts. The results show that Contour+ (MVision AI Segmentation) assists in reducing the upfront effort and time required for contouring CT and MR images, which can instead be devoted by clinicians on refining and reviewing the software-generated contours." This indicates that experts reviewed the output and perceived a benefit in efficiency, but it does not detail a formal MRMC study comparing accuracy or time, with a specific effect size.


    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Yes, a standalone performance evaluation of the algorithm was conducted. The primary performance metrics (DSC and S-DSC@2mm) were calculated by directly comparing the "produced auto-segmentations to ground truth segmentations," which is a standalone assessment of the algorithm's output. The statement "Performance verification and validation results for various subsets of the golden dataset show the generalizability and robustness of the device" further supports this.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The ground truth used was expert consensus segmentations. The text repeatedly refers to comparing the device's output to "ground truth segmentations" established by "radiotherapy experts." There is no mention of pathology or outcomes data being used for ground truth.


    8. The sample size for the training set

    The exact sample size for the training set is not specified, but the models were "trained on several anatomical sites... using hundreds of scans from a diverse patient population."


    9. How the ground truth for the training set was established

    The text states that the machine learning models were "trained on several anatomical sites... using hundreds of scans from a diverse patient population." While it doesn't explicitly detail the process for establishing ground truth for the training set, it is implied to be through expert contouring/segmentation, as the validation uses "ground truth segmentations" which are established by "radiotherapy experts." Given the extensive training data required for machine learning, it's highly probable that these "hundreds of scans" also had expert-derived segmentations as their ground truth for training.


    K Number
    K212915
    Manufacturer
    Date Cleared
    2022-05-03

    (232 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Device Name:

    MVision AI Segmentation

    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, is PCCP Authorized, Third-party, Expedited review
    Intended Use

    MVision AI Segmentation is a software system for image analysis algorithms to be used in radiation therapy treatment planning workflows. The system includes processing tools for automatic contouring of CT images using machine learning based algorithms. The produced segmentation templates for regions of interest must be transferred to appropriate image visualization systems as an initial template for a medical professional to visualize, review, modify and approve prior to further use in clinical workflows.

    The system creates initial contours of pre-defined structures of common anatomical sites, i.e. Head and Neck, Brain, Breast, Lung and Abdomen, Male Pelvis, and Female Pelvis in adult patients.

    MVision AI Segmentation is not intended to detect lesions or tumors. The device is not intended for use with real-time adaptive planning workflows.

    Device Description

    MVision AI Segmentation is a software-only medical device which can be used to accelerate region of interest (ROI) delineation in radiotherapy treatment planning by creating automatic segmentation templates on CT images for these ROIs.

    The segmentations are produced by pre-trained, locked, and static models that are based on deep artificial neural networks. The produced structure is intended to be used as a template for medical professionals to visualize, modify and approve prior to further use in clinical workflows.

    The system is integrated with the customer IT network to receive DICOM images. CT images from, for example, a scanner or a treatment planning system (TPS) are exported to the device. A structure set is created in the device, and the created segmentation results are connected to the original images. These data are sent to the destination DICOM import folder to import the data to, for example, a treatment planning system. The produced structures can then be used as a template for manual ROI editing, review and approval workflow. The segmentations are produced by pre-trained and locked models that are based on deep artificial neural networks. To take the device into use, the user does not have to provide any contouring atlases. The models have been trained with the order of hundreds of scans, depending on the ROI in question. The MVision AI Segmentation device creates initial contours of pre-defined structures of common anatomical sites, i.e. Head and Neck, Brain, Breast, Lung and Abdomen, Male Pelvis, and Female Pelvis.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the MVision AI Segmentation device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided text does not explicitly state specific numerical acceptance criteria for evaluation metrics (e.g., a minimum Dice Similarity Coefficient (DSC) or Hausdorff Distance (HD)). Instead, it generally states that the device's performance will "reflect the real clinical performance" and that it produces "usable contours" that "save clinicians' time."

    Therefore, I will extract the relevant performance statements and structure them as clearly as possible, acknowledging the lack of specific thresholds.

    Criterion Type: Clinical Performance
    Acceptance Criteria (conceptual, from text): Segmentation performance should reflect real clinical performance in any radiotherapy clinic following consensus guidelines.
    Reported Device Performance: "Performance verification results for various subsets of the golden dataset show the generalizability and robustness of the device for the US patient population and US medical practice." "MVision AI Segmentation assists in reducing the upfront effort and time on typical contouring which can be spent on refining and reviewing the results." "Performance validation data further suggests that the subject device produces usable contours (ROIs) as a starting point that will save clinicians' time and it will lead to sooner proceeding to essential parts of radiotherapy treatment planning stages."

    Criterion Type: Generalizability
    Acceptance Criteria (conceptual, from text): Models should be generalizable and robust across different patient populations and medical practices.
    Reported Device Performance: "Performance verification results for various subsets of the golden dataset show the generalizability and robustness of the device for the US patient population and US medical practice."

    Criterion Type: Clinical Utility
    Acceptance Criteria (conceptual, from text): Device should provide usable contours that contribute to efficiency and reduce effort in the radiotherapy workflow.
    Reported Device Performance: "Performance validation data further suggests that the subject device produces usable contours (ROIs) as a starting point that will save clinicians' time and it will lead to sooner proceeding to essential parts of radiotherapy treatment planning stages." "MVision AI Segmentation assists in reducing the upfront effort and time on typical contouring which can be spent on refining and reviewing the results."

    Criterion Type: Safety and Effectiveness
    Acceptance Criteria (conceptual, from text): The device should be non-inferior, safe, and effective compared to the predicate device.
    Reported Device Performance: "Software verification and validation and Performance evaluation tests for machine learning based algorithms establish that the subject medical device is non-inferior, performs safely and effectively as the listed predicate device."
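    The "conceptual" criteria above imply a mechanically simple verification step: compare each structure's mean similarity scores against per-metric minimums. The sketch below shows such a gate; the threshold values are placeholders invented for illustration, not figures from the submission, which does not publish them.

```python
# Hypothetical per-metric thresholds -- the submission describes a "set level
# of minimum agreement" but does not publish the numbers, so these values
# are placeholders, not figures from the 510(k).
THRESHOLDS = {"DSC": 0.80, "S-DSC@2mm": 0.90}

def passes_acceptance(results, thresholds=THRESHOLDS):
    """results maps ROI name -> {metric name: mean score on the test set}.
    Returns (overall pass, list of (roi, metric, score) failures)."""
    failures = [
        (roi, metric, score)
        for roi, metrics in sorted(results.items())
        for metric, score in sorted(metrics.items())
        if metric in thresholds and score < thresholds[metric]
    ]
    return (not failures, failures)
```

    A gate of this shape also explains the "various subsets of the golden dataset" language: running the same check per anatomical site or per data source is how generalizability claims are usually substantiated.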

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Test Set ("Golden Dataset") Sample Size: The exact number of cases in the test set is not explicitly stated. The document mentions "various subsets of the golden dataset."
    • Data Provenance: The data originates from "multiple different sources" to ensure generalizability. It is collected to reflect "the US patient population and US medical practice." The text does not specify countries of origin beyond "US patient population." The data type is implied to be CT images for use in radiotherapy. The text does not specify if the data is retrospective or prospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: The number of experts used to establish ground truth for the test set is not explicitly stated.
    • Qualifications of Experts: The ground truth for the test set was established by "radiotherapy experts." No further specific qualifications (e.g., years of experience, specific subspecialty) are provided.

    4. Adjudication Method for the Test Set

    • The text does not describe a specific adjudication method (e.g., 2+1, 3+1). It only states that the ground truth was established by "radiotherapy experts."

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No MRMC study comparing human readers with and without AI assistance was reported. The document focuses on the device's performance and its ability to "assist in reducing the upfront effort and time" for clinicians, implying an improvement in efficiency, but not a formal MRMC study demonstrating a quantified effect size of human improvement with AI vs without.

    6. Standalone Performance Study (Algorithm Only)

    • Yes, a standalone performance evaluation was clearly done. The entire "Performance Evaluation Summary" section (Pages 7-8) describes the evaluation of the "model performance" and "machine learning based algorithms" on "training and test sets (golden dataset)." The results refer to the device producing contours and assisting in reducing effort, indicating an algorithm-only evaluation.

    7. Type of Ground Truth Used

    • The ground truth used is expert consensus, established by "radiotherapy experts" following "segmentation consensus guidelines."

    8. Sample Size for the Training Set

    • The models were trained with "the order of hundreds of scans, depending on the ROI in question."

    9. How the Ground Truth for the Training Set Was Established

    • The ground truth for the training set was established following "segmentation consensus guidelines" as the models were "trained to comply with" these guidelines. This implies expert-derived ground truth, consistent with the test set's ground truth methodology.
