Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K221706
    Device Name
    AccuContour
    Date Cleared
    2023-03-09 (269 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Device Name: AccuContour

    Intended Use

    It is used by the radiation oncology department to register multi-modality images and segment (non-contrast) CT images, to generate the information needed for treatment planning, treatment evaluation, and treatment adaptation.

    Device Description

    The proposed device, AccuContour, is standalone software used by the radiation oncology department to register multi-modality images and segment (non-contrast) CT images, to generate the information needed for treatment planning, treatment evaluation, and treatment adaptation.

    The product has three image processing functions:

    • (1) Deep learning contouring: automatically contours organs-at-risk in the head and neck, thorax, abdomen, and pelvis (for both male and female) areas;
    • (2) Automatic registration: rigid and deformable registration; and
    • (3) Manual contouring.

    It also has the following general functions:

    • Receive, add/edit/delete, transmit, and input/export medical images and DICOM data;
    • Patient management;
    • Review of processed images;
    • Extension tool;
    • Plan evaluation and plan comparison;
    • Dose analysis.
    AI/ML Overview

    This document (K221706) is a 510(k) Premarket Notification for the AccuContour device by Manteia Technologies Co., Ltd. It declares substantial equivalence to a predicate device and several reference devices. The focus here is on the performance data related to the "Deep learning contouring" feature and the "Automatic registration" feature.

    Based on the provided document, here's a detailed breakdown of the acceptance criteria and the study proving the device meets them:

    I. Acceptance Criteria and Reported Device Performance

    The document does not explicitly provide a table of acceptance criteria and reported device performance for the deep learning contouring. Instead, it states that "Software verification and regression testing have been performed successfully to meet their previously determined acceptance criteria as stated in the test plans." This implies that internal acceptance criteria were met, but those specific criteria and the detailed performance results (e.g., Dice scores or Hausdorff distances for contours) are not disclosed in this summary.

    However, for the deformable registration, it provides a comparative statement:

    Feature: Deformable Registration
    Acceptance Criteria (Implied): Non-inferiority to the reference device (K182624), based on Normalized Mutual Information (NMI)
    Reported Device Performance: The NMI value of the proposed device was non-inferior to that of the reference device.

    It's important to note:

    • For Deep Learning Contouring: No specific performance metrics or acceptance criteria are listed in this 510(k) summary. The summary only broadly mentions that the software "can automatically contour organs-at-risk, in head and neck, thorax, abdomen and pelvis (for both male and female) areas." The success is implicitly covered by the "Software verification and validation testing" section.
    • For Automatic Registration: The criterion is non-inferiority in NMI compared to a reference device. The specific NMI values are not provided, only the conclusion of non-inferiority.
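
    Since the submission reports only the non-inferiority conclusion and not the underlying NMI values, the following is a minimal, purely illustrative sketch of how a histogram-based normalized mutual information value can be computed between a fixed image and a registered moving image. The bin count and the normalization form (H(A) + H(B)) / H(A, B) are assumptions chosen for illustration; the 510(k) summary does not describe how NMI was actually calculated.

```python
import numpy as np

def normalized_mutual_information(fixed, moving, bins=64):
    """Histogram-based NMI between two same-shaped image arrays.

    Uses the common form NMI = (H(A) + H(B)) / H(A, B), where H is the
    Shannon entropy. The binning and normalization choices here are
    illustrative assumptions, not details taken from the 510(k) summary.
    """
    # Joint histogram of intensity pairs, normalized to a joint probability
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)  # marginal distribution of the fixed image
    py = pxy.sum(axis=0)  # marginal distribution of the moving image

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

# Toy check: identical images give the maximum NMI of 2.0;
# unrelated images fall toward 1.0.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
print(normalized_mutual_information(img, img))
print(normalized_mutual_information(img, rng.random((128, 128))))
```

    Higher NMI after registration indicates a stronger intensity correspondence between the aligned images, which is why it can serve as an objective, reader-free comparison metric between two registration algorithms.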

    II. Sample Size and Data Provenance

    • Test Set (for Deformable Registration):
      • Sample Size: Not explicitly stated as a number, but described as "multi-modality image sets from different patients."
      • Data Provenance: "All fixed images and moving images are generated in healthcare institutions in U.S." This indicates prospective data collection (or at least collected with the intent for such testing) from the U.S.
    • Training Set (for Deep Learning Contouring):
      • Sample Size: Not explicitly stated in the provided document.
      • Data Provenance: Not explicitly stated in the provided document.

    III. Number of Experts and Qualifications for Ground Truth

    • For the Test Set (Deformable Registration): The document does not mention the use of experts or ground truth establishment for the deformable registration test beyond the use of NMI for "evaluation." NMI is an image similarity metric and does not typically require human expert adjudication of registration quality in the same way contouring might.
    • For the Training Set (Deep Learning Contouring): The document does not specify the number of experts or their qualifications for establishing ground truth for the training set.

    IV. Adjudication Method for the Test Set

    • For Deformable Registration: Not applicable in the traditional sense, as NMI is an objective quantitative metric. There's no mention of human adjudication for registration quality here.
    • For Deep Learning Contouring (Test Set): The document notes that no clinical study was included in this submission. If a test set was used for the deep learning contouring, its ground truth (and any adjudication process) is not described in this 510(k) summary. Ground truth was likely established internally, for example through expert consensus, but details are not provided.

    V. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was it done?: No, an MRMC comparative effectiveness study was not reported. The document explicitly states: "No clinical study is included in this submission."
    • Effect Size: Not applicable, as no such study was performed or reported.

    VI. Standalone (Algorithm Only) Performance Study

    • Was it done?: Yes, for the deformable registration feature. The NMI evaluation was "on two sets of images for both the proposed device and reference device (K182624), respectively." This is an algorithm-only (standalone) comparison.
    • For Deep Learning Contouring: While the deep learning contouring is a standalone feature, the document does not provide details of its standalone performance evaluation (e.g., against expert ground truth). It only states that software verification and validation were performed to meet acceptance criteria.

    VII. Type of Ground Truth Used

    • Deformable Registration: The "ground truth" for the deformable registration evaluation was implicitly the images themselves, with NMI being used as a metric to compare the alignment achieved by the proposed device versus the reference device. It's an internal consistency/similarity metric rather than a "gold standard" truth established by external means like pathology or expert consensus.
    • Deep Learning Contouring: Not explicitly stated in the provided document. Given that it's an AI-based contouring tool and no clinical study was performed, the ground truth for training and internal testing would typically be established by expert consensus (e.g., radiologist or radiation oncologist contours) or pathology, but the document does not specify.

    VIII. Sample Size for the Training Set

    • Not explicitly stated in the provided document for either the deep learning contouring or the automatic registration.

    IX. How Ground Truth for the Training Set was Established

    • Not explicitly stated in the provided document for either the deep learning contouring or the automatic registration. For deep learning, expert-annotated images are the typical method, but details are absent here.

    K Number
    K191928
    Device Name
    AccuContour
    Date Cleared
    2020-02-28 (224 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Device Name: AccuContour

    Intended Use

    It is used by the radiation oncology department to register multimodality images and segment (non-contrast) CT images, to generate the information needed for treatment planning, treatment evaluation, and treatment adaptation.

    Device Description

    The proposed device, AccuContour™, is standalone software used by the radiation oncology department to register multimodality images and segment (non-contrast) CT images, to generate the information needed for treatment planning, treatment evaluation, and treatment adaptation.

    The product has three image processing functions:
    (1) Deep learning contouring: automatically contours organs-at-risk in the head and neck, thorax, abdomen, and pelvis (for both male and female),
    (2) Automatic registration, and
    (3) Manual contouring.

    It also has the following general functions:
    Receive, add/edit/delete, transmit, and input/export medical images and DICOM data;
    Patient management;
    Review of processed images;
    Open and Save of files.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the AccuContour™ device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state numerical acceptance criteria for the Dice similarity coefficient (DSC) for segmentation or Normalized Mutual Information (NMI) for registration. Instead, it states that the acceptance criterion is non-inferiority compared to the predicate device.

    Performance Metric: Segmentation (DSC)
    Acceptance Criteria: Non-inferiority to the predicate device (K182624)
    Reported Device Performance: The DSC of the proposed device was non-inferior to that of predicate device K182624.

    Performance Metric: Registration (NMI)
    Acceptance Criteria: Non-inferiority to the predicate device (K182624)
    Reported Device Performance: The NMI of the proposed device was non-inferior to that of predicate device K182624.
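
    The summary reports only these non-inferiority conclusions, not the DSC or NMI values themselves. As background on the segmentation metric, here is a minimal sketch of how a Dice similarity coefficient is typically computed between a predicted organ mask and a reference mask; the toy masks and the handling of empty masks are illustrative assumptions, since the submission does not describe its DSC calculation.

```python
import numpy as np

def dice_similarity(pred, ref):
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |pred ∩ ref| / (|pred| + |ref|); 1.0 means perfect overlap,
    0.0 means no overlap. Illustrative only -- the 510(k) summary does not
    describe how DSC was computed.
    """
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    intersection = np.logical_and(pred, ref).sum()
    denominator = pred.sum() + ref.sum()
    if denominator == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * intersection / denominator

# Toy example with two overlapping square "organs" on a 2-D grid
pred = np.zeros((100, 100), dtype=bool); pred[20:60, 20:60] = True
ref = np.zeros((100, 100), dtype=bool);  ref[30:70, 30:70] = True
print(round(dice_similarity(pred, ref), 3))  # 0.562 for this toy example
```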

    2. Sample Size Used for the Test Set and Data Provenance

    • Segmentation Performance Test:

      • Test Set Description: Two separate tests were performed.
        • One test involved images generated in healthcare institutions in China using scanner models from GE, Siemens, and Philips.
        • The other test involved images generated in healthcare institutions in the US using scanner models from GE, Siemens, and Philips.
        • For each body part, all intended organs were included in images from both US and China datasets.
      • Sample Size: The exact number of images or cases in each test set is not specified.
      • Data Provenance: Retrospective, from healthcare institutions in China and the US.
    • Registration Performance Test:

      • Test Set Description: Two separate tests were performed.
        • One test involved images generated in healthcare institutions in China using scanner models from GE, Siemens, and Philips, tested on multi-modality image sets from the same patients.
        • The other test involved mostly images generated in healthcare institutions in the US, with a small number of moving images obtained from online databases (originally from non-US sources). This test used multi-modality image sets from different patients.
        • Both tests covered various modalities (CT/CT, CT/MR, CT/PET).
      • Sample Size: The exact number of images or cases in each test set is not specified.
      • Data Provenance: Retrospective, from healthcare institutions in China and the US, with some online database images (non-US origin) for the US registration test.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: At least three licensed physicians.
    • Qualifications of Experts: Licensed physicians. (Further sub-specialty or years of experience are not specified, but licensure implies a professional medical qualification.)

    4. Adjudication Method for the Test Set

    The ground truth was generated from the consensus of at least three licensed physicians. The "consensus" phrasing implies agreement among the experts (whether unanimous or by majority), but the specific adjudication process (e.g., voting, or discussion to reach full agreement) is not detailed.
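
    As one hypothetical illustration of how such a consensus could be assembled (the submission does not describe the mechanism), the sketch below forms a per-voxel majority vote across independently drawn physician contours. The three-reader setup and the majority-vote rule are assumptions, not the documented method.

```python
import numpy as np

def majority_vote_consensus(masks):
    """Combine several binary contour masks into a consensus mask.

    A voxel is included in the consensus when more than half of the
    readers included it. This per-voxel majority vote is a hypothetical
    illustration; the 510(k) summary does not describe how the physicians'
    consensus was actually reached.
    """
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    votes = stack.sum(axis=0)
    return votes > (stack.shape[0] / 2.0)

# Toy example: three readers, slight disagreement on the border
reader_masks = []
for shift in (0, 1, 2):
    m = np.zeros((50, 50), dtype=bool)
    m[10 + shift:40 + shift, 10:40] = True
    reader_masks.append(m)

consensus = majority_vote_consensus(reader_masks)
print(consensus.sum())  # size of the 2-of-3 agreement region
```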

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not performed or reported in this summary. The comparison was algorithm-to-algorithm (proposed device vs. predicate device), not involving human readers' performance with and without AI assistance.

    6. Standalone Performance Study

    Yes, a standalone (algorithm only without human-in-the-loop performance) study was performed. The segmentation and registration accuracies (DICE and NMI respectively) were calculated for the proposed device's algorithm and compared to the predicate device's algorithm.
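
    The summary does not state the statistical procedure, the non-inferiority margin, or the per-case scores used for these algorithm-to-algorithm comparisons. Purely as an illustration of what a paired non-inferiority check can look like, the sketch below compares a one-sided confidence bound on the mean score difference (proposed minus predicate) against a pre-specified margin; the margin, alpha level, and normal approximation are all assumptions, not values from the submission.

```python
import numpy as np

def non_inferior(proposed, predicate, margin=0.02, z=1.645):
    """One-sided non-inferiority check on paired per-case scores.

    Declares non-inferiority when the lower ~95% confidence bound of the
    mean difference (proposed - predicate) exceeds -margin. The margin,
    alpha level, and normal approximation are illustrative assumptions.
    """
    diff = np.asarray(proposed, dtype=float) - np.asarray(predicate, dtype=float)
    mean = diff.mean()
    se = diff.std(ddof=1) / np.sqrt(diff.size)
    lower_bound = mean - z * se
    return lower_bound > -margin

# Toy example with simulated per-case DSC values for 30 cases
rng = np.random.default_rng(1)
predicate_scores = rng.normal(0.85, 0.03, size=30)
proposed_scores = predicate_scores + rng.normal(0.005, 0.01, size=30)
print(non_inferior(proposed_scores, predicate_scores))  # True for this toy data
```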

    7. Type of Ground Truth Used

    The ground truth used was expert consensus. Specifically, for both segmentation and registration, the ground truth for each image was generated from the consensus of at least three licensed physicians.

    8. Sample Size for the Training Set

    The document does not specify the sample size used for the training set. It only describes the test sets.

    9. How the Ground Truth for the Training Set was Established

    The document does not provide information on how the ground truth for the training set was established. It only details the ground truth establishment for the test sets.

