Found 3 results

510(k) Data Aggregation

    K Number: K221706
    Device Name: AccuContour
    Date Cleared: 2023-03-09 (269 days)
    Product Code:
    Regulation Number: 892.2050
    Predicate For: N/A
    Reference Devices: K182624, K173636, K181572
    Intended Use

    It is used by radiation oncology departments to register multi-modality images and segment non-contrast CT images, generating the information needed for treatment planning, treatment evaluation, and treatment adaptation.

    Device Description

    The proposed device, AccuContour, is standalone software used by radiation oncology departments to register multi-modality images and segment non-contrast CT images, generating the information needed for treatment planning, treatment evaluation, and treatment adaptation.

    The product has three image processing functions:

    • (1) Deep learning contouring: automatically contours organs-at-risk in the head and neck, thorax, abdomen, and pelvis (for both male and female) areas;
    • (2) Automatic registration: rigid and deformable registration; and
    • (3) Manual contouring.

    It also has the following general functions:

    • Receive, add/edit/delete, transmit, and import/export medical images and DICOM data;
    • Patient management;
    • Review of processed images;
    • Extension tool;
    • Plan evaluation and plan comparison;
    • Dose analysis.
    AI/ML Overview

    This document (K221706) is a 510(k) Premarket Notification for the AccuContour device by Manteia Technologies Co., Ltd. It declares substantial equivalence to a predicate device and several reference devices. The focus here is on the performance data related to the "Deep learning contouring" feature and the "Automatic registration" feature.

    Based on the provided document, here's a detailed breakdown of the acceptance criteria and the study proving the device meets them:

    I. Acceptance Criteria and Reported Device Performance

    The document does not explicitly provide a clear table of acceptance criteria and the reported device performance for the deep learning contouring in the format requested. Instead, it states that "Software verification and regression testing have been performed successfully to meet their previously determined acceptance criteria as stated in the test plans." This implies that internal acceptance criteria were met, but these specific criteria and the detailed performance results (e.g., dice scores, Hausdorff distance for contours) are not disclosed in this summary.
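
The contour metrics named above (Dice score, Hausdorff distance) are standard segmentation measures. A minimal, illustrative sketch of both, not taken from the submission, operating on sets of voxel coordinates:

```python
import math

def dice_coefficient(a: set, b: set) -> float:
    """Dice overlap, 2|A∩B| / (|A| + |B|), for two sets of voxel coordinates."""
    if not a and not b:
        return 1.0  # both contours empty: perfect agreement by convention
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff_distance(a: set, b: set) -> float:
    """Symmetric Hausdorff distance between two point sets (brute force)."""
    def directed(src, dst):
        # largest distance from any point in src to its nearest point in dst
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(a, b), directed(b, a))
```

In practice these are computed on 3-D voxel masks, often with image-analysis libraries, and the 95th-percentile Hausdorff distance is commonly reported instead of the maximum to reduce sensitivity to outlier voxels.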

    However, for the deformable registration, it provides a comparative statement:

    Feature: Deformable Registration
    Acceptance Criteria (Implied): Non-inferiority to the reference device (K182624) based on Normalized Mutual Information (NMI)
    Reported Device Performance: The NMI value of the proposed device was non-inferior to that of the reference device.
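
For reference, NMI is computed from the joint intensity histogram of the fixed and moving images. The pure-Python sketch below uses the Studholme formulation NMI = (H(A) + H(B)) / H(A, B); the bin count is an assumption for illustration, and this is not the manufacturer's implementation:

```python
import math
from collections import Counter

def nmi(fixed, moving, bins=8):
    """Normalized Mutual Information of two equal-length intensity sequences.
    NMI = (H(A) + H(B)) / H(A, B); ranges from 1 (independent) to 2 (identical)."""
    def bin_idx(v, lo, hi):
        return 0 if hi == lo else min(int((v - lo) / (hi - lo) * bins), bins - 1)

    lo_f, hi_f = min(fixed), max(fixed)
    lo_m, hi_m = min(moving), max(moving)
    # joint histogram over (fixed-bin, moving-bin) pairs
    joint = Counter((bin_idx(a, lo_f, hi_f), bin_idx(b, lo_m, hi_m))
                    for a, b in zip(fixed, moving))
    n = sum(joint.values())
    p_a = Counter()
    p_b = Counter()
    for (i, j), c in joint.items():
        p_a[i] += c / n
        p_b[j] += c / n

    def entropy(ps):
        return -sum(p * math.log(p) for p in ps if p > 0)

    h_ab = entropy(c / n for c in joint.values())
    return (entropy(p_a.values()) + entropy(p_b.values())) / h_ab if h_ab else 2.0
```

A higher NMI after registration indicates better alignment, which is why non-inferiority in NMI can serve as a registration-quality comparison without human adjudication.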

    It's important to note:

    • For Deep Learning Contouring: No specific performance metrics or acceptance criteria are listed in this 510(k) summary. The summary only broadly mentions that the software "can automatically contour organs-at-risk, in head and neck, thorax, abdomen and pelvis (for both male and female) areas." The success is implicitly covered by the "Software verification and validation testing" section.
    • For Automatic Registration: The criterion is non-inferiority in NMI compared to a reference device. The specific NMI values are not provided, only the conclusion of non-inferiority.

    II. Sample Size and Data Provenance

    • Test Set (for Deformable Registration):
      • Sample Size: Not explicitly stated as a number, but described as "multi-modality image sets from different patients."
      • Data Provenance: "All fixed images and moving images are generated in healthcare institutions in U.S." This indicates prospective data collection (or at least data collected with the intent of such testing) from the U.S.
    • Training Set (for Deep Learning Contouring):
      • Sample Size: Not explicitly stated in the provided document.
      • Data Provenance: Not explicitly stated in the provided document.

    III. Number of Experts and Qualifications for Ground Truth

    • For the Test Set (Deformable Registration): The document does not mention the use of experts or ground truth establishment for the deformable registration test beyond the use of NMI for "evaluation." NMI is an image similarity metric and does not typically require human expert adjudication of registration quality in the same way contouring might.
    • For the Training Set (Deep Learning Contouring): The document does not specify the number of experts or their qualifications for establishing ground truth for the training set.

    IV. Adjudication Method for the Test Set

    • For Deformable Registration: Not applicable in the traditional sense, as NMI is an objective quantitative metric. There's no mention of human adjudication for registration quality here.
    • For Deep Learning Contouring (Test Set): The document notes there was no clinical study included in this submission. This implies that if a test set for the deep learning contouring was used, its ground truth (and any adjudication process for it) is not described in this 510(k) summary. Given the absence of a clinical study, it's highly probable that ground truth for performance evaluation of deep learning contouring was established internally through expert consensus or other methods, but details are not provided.

    V. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was it done?: No, an MRMC comparative effectiveness study was not reported. The document explicitly states: "No clinical study is included in this submission."
    • Effect Size: Not applicable, as no such study was performed or reported.

    VI. Standalone (Algorithm Only) Performance Study

    • Was it done?: Yes, for the deformable registration feature. The NMI evaluation was "on two sets of images for both the proposed device and reference device (K182624), respectively." This is an algorithm-only (standalone) comparison.
    • For Deep Learning Contouring: While the deep learning contouring is a standalone feature, the document does not provide details of its standalone performance evaluation (e.g., against expert ground truth). It only states that software verification and validation were performed to meet acceptance criteria.

    VII. Type of Ground Truth Used

    • Deformable Registration: The "ground truth" for the deformable registration evaluation was implicitly the images themselves, with NMI being used as a metric to compare the alignment achieved by the proposed device versus the reference device. It's an internal consistency/similarity metric rather than a "gold standard" truth established by external means like pathology or expert consensus.
    • Deep Learning Contouring: Not explicitly stated in the provided document. Given that it's an AI-based contouring tool and no clinical study was performed, the ground truth for training and internal testing would typically be established by expert consensus (e.g., radiologist or radiation oncologist contours) or pathology, but the document does not specify.

    VIII. Sample Size for the Training Set

    • Not explicitly stated in the provided document for either the deep learning contouring or the automatic registration.

    IX. How Ground Truth for the Training Set was Established

    • Not explicitly stated in the provided document for either the deep learning contouring or the automatic registration. For deep learning, expert-annotated images are the typical method, but details are absent here.

    K Number: K200323
    Device Name: AutoContour
    Manufacturer:
    Date Cleared: 2020-10-30 (263 days)
    Product Code:
    Regulation Number: 892.2050
    Predicate For:
    Reference Devices: K130393, K181572
    Intended Use

    AutoContour is intended to assist radiation treatment planners in contouring structures within medical images in preparation for radiation therapy treatment planning.

    Device Description

    AutoContour consists of 3 main components:

    1. An "agent" service designed to run on the Windows Operating System that is configured by the user to monitor a network storage location for new CT datasets that are to be automatically uploaded to:
    2. A cloud-based AutoContour automatic contouring service that produces initial contours and
    3. A web application accessed via web browser which allows the user to perform registration with other image sets as well as review, edit, and export the structure set containing the contours.
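
The "agent" in component 1 is essentially a directory poller. A hypothetical, minimal sketch follows; the `*.dcm` glob pattern, polling interval, and `upload` callback are assumptions for illustration, not details from the submission:

```python
import time
from pathlib import Path

def find_new_datasets(watch_dir: str, seen: set) -> list:
    """Return files in watch_dir not previously seen, and mark them as seen."""
    new = sorted(p for p in Path(watch_dir).glob("*.dcm") if p.name not in seen)
    seen.update(p.name for p in new)
    return new

def run_agent(watch_dir: str, upload, poll_seconds: float = 5.0):
    """Poll watch_dir forever, handing each newly appearing dataset to `upload`."""
    seen: set = set()
    while True:
        for path in find_new_datasets(watch_dir, seen):
            upload(path)  # e.g. transmit to the cloud contouring service
        time.sleep(poll_seconds)
```

A production agent would also need to confirm that a DICOM series is complete before uploading, since files for one CT study typically arrive over a window of time.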
    AI/ML Overview

    The provided text only partially describes the acceptance criteria and the study used to demonstrate that the device meets them. Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria & Reported Device Performance

    The document states that formal acceptance criteria and reported device performance are detailed in "Radformation's AutoContour Complete Test Protocol and Report." However, this specific report is not included in the provided text. The summary only generally states that "Nonclinical tests were performed... which demonstrates that AutoContour performs as intended per its indications for use" and "Verification and validation tests were performed to ensure that the software works as intended and pass/fail criteria were used to verify requirements."

    Therefore, a table of acceptance criteria and reported device performance cannot be constructed from the provided text.

    2. Sample Size Used for the Test Set and Data Provenance

    The document mentions that "tests were performed on independent datasets from those included in training and validation sets in order to validate the generalizability of the machine learning model." However, the sample size for the test set is not explicitly stated.

    Regarding data provenance:

    • The document implies the data used was medical image data (specifically CT, and for registration purposes, MR and PET).
    • The country of origin is not specified.
    • The terms "training and validation sets" and "independent datasets" suggest these were retrospective datasets used for model development and evaluation. There is no mention of prospective data collection.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document does not provide any information about the number of experts used to establish ground truth for the test set or their qualifications.

    4. Adjudication Method for the Test Set

    The document does not specify any adjudication method (e.g., 2+1, 3+1, none) used for the test set.

    5. If an MRMC Comparative Effectiveness Study Was Done, What Was the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance?

    The document explicitly states: "As with the Predicate Devices, no clinical trials were performed for AutoContour." This indicates that an MRMC comparative effectiveness study involving human readers and AI assistance was not conducted. Therefore, no effect size for human reader improvement is reported.

    6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done

    The document mentions "tests were performed on independent datasets from those included in training and validation sets in order to validate the generalizability of the machine learning model." This strongly suggests that standalone performance of the algorithm was evaluated. Although specific metrics for this standalone performance are not detailed in the provided text, the validation of a machine learning model against independent datasets implies a standalone evaluation.

    7. The Type of Ground Truth Used

    The document mentions that AutoContour is intended to "assist radiation treatment planners in contouring structures within medical images." Given this, the ground truth for the contours would typically be expert consensus or expert-annotated contours. However, the document itself does not explicitly state the type of ground truth used (e.g., expert consensus, pathology, outcomes data).

    8. The Sample Size for the Training Set

    The document mentions "training and validation sets" but does not provide the sample size for the training set.

    9. How the Ground Truth for the Training Set Was Established

    The document mentions "training and validation sets" but does not detail how the ground truth for the training set was established. Similar to the test set, it would likely involve expert contouring, but this is not explicitly stated.


    K Number: K193252
    Manufacturer:
    Date Cleared: 2020-07-02 (220 days)
    Product Code:
    Regulation Number: 892.2050
    Predicate For:
    Reference Devices: K190379, K181572
    Intended Use

    Contour ProtégéAI is used by trained medical professionals as a tool to aid in the automated processing of digital medical images of modalities CT and MR, as supported by ACR/NEMA DICOM 3.0. Contour ProtégéAI assists in the following indications:

    The creation of contours using machine-learning algorithms for applications including, but not limited to, quantitative analysis, aiding adaptive therapy, transferring contours to radiation therapy treatment planning systems, and archiving contours for patient follow-up and management.

    Segmenting normal structures across a variety of CT anatomical locations.

    And segmenting normal structures of the prostate, seminal vesicles, and urethra within T2-weighted MR images.

    Contour ProtégéAI must be used in conjunction with MIM software to review and, if necessary, edit contours that were automatically generated by Contour ProtégéAI.

    Device Description

    Contour ProtégéAI is an accessory to MIM software that automatically creates contours on medical images through the use of machine-learning algorithms. It is designed for use in the processing of medical images and operates on Windows, Mac, and Linux computer systems. Contour ProtégéAI is deployed on a remote server using the MIMcloud service for data management and transfer.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for MIM Software Inc.'s Contour ProtégéAI, based on the provided FDA 510(k) summary:

    Acceptance Criteria and Device Performance

    Acceptance Criterion: Non-inferiority to predicate device
    Reported Performance: Contour ProtégéAI was shown to be non-inferior to the predicate device (MIM) with regard to the mean Dice coefficient of automatically generated contours. Non-inferiority was established with a limit of 0.1 Dice, meaning the performance of Contour ProtégéAI was no more than 0.1 Dice worse than the predicate.

    Acceptance Criterion: Clinically acceptable performance
    Reported Performance: The non-inferiority limit of 0.1 Dice was determined to be the largest clinically acceptable difference based on previous studies.

    Acceptance Criterion: Automated segmentation of CT images
    Reported Performance: Demonstrated through the non-inferiority study on a test set of 286 CT images.

    Acceptance Criterion: Automated segmentation of MR images
    Reported Performance: Demonstrated through the non-inferiority study on a test set of 72 MR images.
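
A non-inferiority check with a 0.1-Dice margin can be sketched as a simple one-sided test. The paired design, normal approximation, and 95% confidence level below are assumptions for illustration, since the summary does not describe the statistical method actually used:

```python
import math
from statistics import mean, stdev

def non_inferior(device_dice, predicate_dice, margin=0.1, z=1.645):
    """Paired one-sided check: the device is non-inferior if the lower 95%
    confidence bound of mean(device - predicate) lies above -margin."""
    diffs = [d - p for d, p in zip(device_dice, predicate_dice)]
    se = stdev(diffs) / math.sqrt(len(diffs))
    return mean(diffs) - z * se > -margin
```

The choice of margin matters: with 0.1 Dice, a device whose mean Dice is 0.05 below the predicate's would still pass, which is why the summary justifies 0.1 as the largest clinically acceptable difference.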

    Study Details

    1. Sample sizes used for the test set and data provenance:

      • CT Images: 286 images
      • MR Images: 72 images
      • Data Provenance: The test images were gathered from "a different and disjoint set of institutions from the training data." This indicates an independent, external validation set, likely retrospective in nature given that it's an existing dataset. The specific country of origin is not specified.
    2. Number of experts used to establish the ground truth for the test set and qualifications of those experts:

      • The document does not explicitly state the number of experts or their qualifications for establishing the ground truth of the test set. It mentions "associated segmentations" for the training data but not how test set ground truth was created or by whom.
    3. Adjudication method for the test set:

      • The document does not specify an adjudication method (e.g., 2+1, 3+1). It states that the neural network models were evaluated against "associated segmentations," implying a reference truth was available.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance:

      • No MRMC study evaluating human reader improvement with AI assistance was performed or reported in this summary. The study focused on the standalone performance of the Contour ProtégéAI against a predicate device.
    5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

      • Yes, a standalone study was done. The non-inferiority test directly compared the automatically generated contours of Contour ProtégéAI against those of the predicate device, both operating without human intervention for the contouring process itself. The instructions for Contour ProtégéAI do state that it "must be used in conjunction with MIM software to review and, if necessary, edit contours." However, the reported performance study focused on the initial automated segmentation output.
    6. The type of ground truth used:

      • The ground truth for both training and testing datasets consisted of "associated segmentations." While not explicitly stated, these are typically expert-generated contours, often from trained medical professionals (e.g., oncologists, radiation oncologists, dosimetrists) or highly experienced image analysts. The document does not specify if pathology or outcomes data were used as ground truth.
    7. The sample size for the training set:

      • The document states that the neural networks were "trained on datasets from several large institutions." It does not provide a specific number of images or cases used in the training set.
    8. How the ground truth for the training set was established:

      • The training datasets included "CT images and MR images and their associated segmentations." This implies that expert-generated contours were available alongside the images for training the machine-learning models. The specific process or number of experts involved in creating these training segmentations is not detailed in the provided text.
