Search Results

Found 3 results

510(k) Data Aggregation

    K Number
    K223834
    Device Name
    AccuCheck
    Date Cleared
    2023-07-20

    (210 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    AccuCheck is a quality assurance software used for data transfer integrity check, secondary dose calculation with Monte Carlo algorithm, and treatment plan verification in radiotherapy. AccuCheck also provides independent dose verification based on LINAC delivery log after radiotherapy plan execution.
    AccuCheck is not a treatment planning system or a radiation delivery device. It is to be used only by trained radiation oncology personnel for quality assurance purposes.

    Device Description

    AccuCheck is a quality assurance software used for data transfer integrity check, secondary dose calculation with Monte Carlo algorithm, and treatment plan verification in radiotherapy. AccuCheck also provides independent dose verification based on LINAC delivery log after radiotherapy plan execution. AccuCheck is not a treatment planning system or a radiation delivery device. It is to be used only by trained radiation oncology personnel for quality assurance purposes.
    AccuCheck uses the TPS Check module to check related parameters in the radiotherapy plan and determine whether the plan is executable by the linear accelerator (LINAC).
    AccuCheck uses the Dose Check module to perform dose calculation verification for radiation treatment plans before radiotherapy by independently recalculating the radiation dose with a Monte Carlo algorithm; the radiation dose is initially calculated by a Treatment Planning System (TPS).
    AccuCheck uses the Transfer Check module to verify the integrity of the treatment plan transmitted from the TPS to the LINAC and to check whether errors occurred during transmission.
    AccuCheck performs dose delivery quality assurance for radiation treatment plans by using the measured data recorded in a LINAC's delivery log files to reconstruct the executed plan and calculate the delivered dose. This is achieved through two software modules of the subject device, Pre-treatment Check and Treatment Check. The difference lies in the usage scenario: Pre-treatment Check processes the logs of the first execution of the treatment plan on the LINAC without a patient actually being treated, while Treatment Check processes the logs of the second and subsequent executions of the treatment plan on the LINAC with a patient actually being treated. AccuCheck is not used for log verification itself, but rather for dose calculation based on logs such as LINAC delivery log data. Reconstruction of the executed plan and calculation of the delivered dose from delivery logs are supported for both Varian and Elekta LINAC machines.
    The product provides multiple tools to assist the analysis, including dose-volume histogram, Gamma analysis, target coverage, Gamma passing rate of each ROI, dose statistics, and clinical target evaluation.
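    To make the dose-volume histogram concrete: a cumulative DVH reports, for each dose level, the fraction of an ROI's volume receiving at least that dose. The following is a minimal illustrative sketch in Python; it is not the vendor's implementation, and the array names and binning scheme are assumptions for illustration only.

        import numpy as np

        def cumulative_dvh(dose, roi_mask, n_bins=200):
            # Cumulative DVH: fraction of ROI volume receiving at least each dose level.
            # dose     : 3D array of dose values in Gy
            # roi_mask : boolean 3D array selecting the ROI voxels
            roi_dose = dose[roi_mask]
            dose_levels = np.linspace(0.0, roi_dose.max(), n_bins)
            volume_fraction = np.array([(roi_dose >= d).mean() for d in dose_levels])
            return dose_levels, volume_fraction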

    AI/ML Overview

    AccuCheck: Acceptance Criteria and Performance Study

    This document outlines the acceptance criteria for the AccuCheck device and details the study conducted to demonstrate the device's performance against these criteria.

    1. Acceptance Criteria and Reported Device Performance

    The provided 510(k) summary describes a secondary dose calculation verification test as a key performance evaluation. While explicit numerical acceptance criteria are not presented in a table format, the narrative indicates that "The results of all test cases passed the test criteria". Based on the checked items during the test, the implied acceptance criteria are the accurate and consistent representation of various dose analysis metrics by AccuCheck when compared to FDA-cleared Treatment Planning Systems (TPS).

    Table 1: Implied Acceptance Criteria and Reported Device Performance

    Acceptance Criterion | Reported Device Performance
    Accurate Dose-Volume Histogram (DVH) representation | Passed for all test cases. Differences in DVH limits were checked.
    Accurate Dose Index calculation | Passed for all test cases. Differences in dose indices were checked.
    Accurate 3D Dose Distribution representation | Passed for all test cases.
    Accurate Dose Profile representation | Passed for all test cases.
    Accurate Gamma Distribution calculation | Passed for all test cases.
    Correct Pass/Fail results for DVH limits | Passed for all test cases.
    Accurate 3D Gamma Passing Rate calculation | Passed for all test cases.
    Acceptable differences in dose indices compared to FDA-cleared TPS | Passed for all test cases.
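    For reference, the 3D Gamma Passing Rate in Table 1 is conventionally the fraction of points whose gamma index (combining a dose-difference criterion and a distance-to-agreement criterion) is at most 1. The sketch below is a simplified global 3%/3 mm gamma analysis that assumes both dose grids are already co-registered; the function and parameter names are illustrative and not taken from the submission.

        import numpy as np

        def gamma_passing_rate(dose_ref, dose_eval, spacing_mm,
                               dose_diff=0.03, dta_mm=3.0, search_mm=6.0):
            # Global gamma analysis on co-registered 3D dose grids.
            # dose_ref, dose_eval : 3D dose arrays in Gy on the same grid
            # spacing_mm          : voxel spacing (dz, dy, dx) in mm
            dd_abs = dose_diff * dose_ref.max()   # dose criterion as absolute dose
            radius = [int(np.ceil(search_mm / s)) for s in spacing_mm]
            gamma_sq = np.full(dose_ref.shape, np.inf)
            for i in range(-radius[0], radius[0] + 1):
                for j in range(-radius[1], radius[1] + 1):
                    for k in range(-radius[2], radius[2] + 1):
                        dist_sq = (i * spacing_mm[0])**2 + (j * spacing_mm[1])**2 + (k * spacing_mm[2])**2
                        if dist_sq > search_mm**2:
                            continue
                        # np.roll wraps at the edges; a production implementation would
                        # handle boundaries and low-dose thresholds explicitly.
                        shifted = np.roll(dose_eval, shift=(i, j, k), axis=(0, 1, 2))
                        term = (shifted - dose_ref)**2 / dd_abs**2 + dist_sq / dta_mm**2
                        gamma_sq = np.minimum(gamma_sq, term)
            # Fraction of voxels with gamma <= 1.
            return float((np.sqrt(gamma_sq) <= 1.0).mean())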

    2. Sample Size and Data Provenance

    • Sample Size for Test Set: 20 patients.
    • Case Distribution:
      • 10 samples for Head and Neck cancers.
      • 5 samples for Chest cancers.
      • 5 samples for Abdomen cancers.
      • Specific cancer types mentioned: Brain, Lung, Head and Neck, and GI cancers.
    • Data Provenance: The document does not explicitly state the country of origin or if the data was retrospective or prospective. However, it mentions that the patients "have been treated with IMRT and VMAT techniques," suggesting the use of retrospective data from actual patient treatments. The joint testing devices included two FDA-cleared LINACs and two FDA-cleared TPS systems, implying that these treatment plans and potentially associated patient data originated from clinical settings where these devices are used.

    3. Number and Qualifications of Experts for Ground Truth

    The document does not explicitly state the number of experts used to establish the ground truth or their specific qualifications (e.g., radiologist with X years of experience). However, the ground truth is established by the "FDA cleared TPS" (Treatment Planning Systems), implying that the generated dose calculations from these cleared systems serve as the reference for accuracy. The expertise is inherently embedded in the design and validation of these cleared TPS and the physics teams operating them in clinical practice.

    4. Adjudication Method

    The document does not describe a specific adjudication method (e.g., 2+1, 3+1). The evaluation appears to be a direct comparison between the AccuCheck's calculations and those of the FDA-cleared TPS, with the "test criteria" seemingly pre-defined based on acceptable differences or concordance metrics.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No Multi-Reader Multi-Case (MRMC) comparative effectiveness study was mentioned. The study described focuses on the device's standalone performance compared to established TPS. There is no information provided regarding human readers improving with AI vs. without AI assistance.

    6. Standalone Performance Study

    Yes, a standalone performance study was conducted. The "Secondary Dose Calculation verification test" directly assesses the AccuCheck device's ability to perform secondary dose calculations independently and accurately when compared against established FDA-cleared TPS. The device operates without human intervention in its calculation process for this specific test.

    7. Type of Ground Truth Used

    The ground truth used for the secondary dose calculation verification test is based on the dose calculations generated by FDA-cleared Treatment Planning Systems (TPS). This implies a reference to established and regulatory-approved dose calculation methods.

    8. Sample Size for the Training Set

    The document does not provide information about the sample size used for the training set. The descriptions focus solely on the verification and validation (test) phase.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how the ground truth for any training set was established, as it does not explicitly mention a training phase for the device development. The performance data section describes verification and validation using established TPS as the reference.


    K Number
    K223724
    Device Name
    MOZI TPS
    Date Cleared
    2023-07-10

    (209 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The MOZI Treatment Planning System (MOZI TPS) is used to plan radiotherapy treatments with malignant or benign diseases. MOZI TPS is used to plan external beam irradiation with photon beams.

    Device Description

    The proposed device, MOZI Treatment Planning System (MOZI TPS), is a standalone software which is used to plan radiotherapy treatments (RT) for patients with malignant or benign diseases. Its core functions include image processing, structure delineation, plan design, optimization and evaluation. Other functions include user login, graphical interface, system and patient management. It can provide a platform for completing the related work of the whole RT plan.

    AI/ML Overview

    The provided text describes the performance data for the MOZI TPS device, focusing on its automatic contouring (structure delineation) feature. Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided document:

    1. A table of acceptance criteria and the reported device performance

    The primary acceptance criterion mentioned for structure delineation (automatic contouring) is based on the Mean Dice Similarity Coefficient (DSC). The study aimed to demonstrate non-inferiority compared to a reference device (AccuContour™ - K191928). While explicit thresholds for "acceptable" Mean DSC values are not given as numerical acceptance criteria in the table below, the text states "The result demonstrated that they have equivalent performance," implying that the reported DSC values met the internal non-inferiority standard set by the manufacturer against the performance of the reference device.
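    For context, the Dice Similarity Coefficient between a predicted contour and a reference contour is twice their overlap divided by the sum of their volumes, and the values reported below are means over the test cases. A minimal sketch follows; it is illustrative only, and the mask names are assumptions.

        import numpy as np

        def dice_coefficient(pred_mask, ref_mask):
            # Dice Similarity Coefficient between two binary segmentation masks.
            pred = pred_mask.astype(bool)
            ref = ref_mask.astype(bool)
            overlap = np.logical_and(pred, ref).sum()
            total = pred.sum() + ref.sum()
            return 1.0 if total == 0 else 2.0 * overlap / total

        # Mean DSC for one organ over a set of test cases:
        # mean_dsc = np.mean([dice_coefficient(p, r) for p, r in zip(pred_masks, ref_masks)])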

    Acceptance criterion (implicit) for all listed OARs: Mean DSC non-inferior to the reference device (AccuContour™ - K191928), reported as "equivalent performance".

    Body Part | OAR | Reported Mean DSC | Mean Standard Deviation
    Head&Neck | Brainstem | 0.88 | 0.03
    Head&Neck | BrachialPlexus_L | 0.61 | 0.05
    Head&Neck | BrachialPlexus_R | 0.64 | 0.05
    Head&Neck | Esophagus | 0.84 | 0.02
    Head&Neck | Eye-L | 0.93 | 0.02
    Head&Neck | Eye-R | 0.93 | 0.02
    Head&Neck | InnerEar-L | 0.78 | 0.06
    Head&Neck | InnerEar-R | 0.82 | 0.04
    Head&Neck | Larynx | 0.87 | 0.02
    Head&Neck | Lens-L | 0.77 | 0.07
    Head&Neck | Lens-R | 0.72 | 0.08
    Head&Neck | Mandible | 0.90 | 0.02
    Head&Neck | MiddleEar_L | 0.73 | 0.04
    Head&Neck | MiddleEar_R | 0.74 | 0.04
    Head&Neck | OpticNerve_L | 0.61 | 0.07
    Head&Neck | OpticNerve_R | 0.62 | 0.08
    Head&Neck | OralCavity | 0.90 | 0.03
    Head&Neck | OpticChiasm | 0.64 | 0.10
    Head&Neck | Parotid-L | 0.83 | 0.03
    Head&Neck | Parotid-R | 0.83 | 0.04
    Head&Neck | PharyngealConstrictors_U | 0.87 | 0.03
    Head&Neck | PharyngealConstrictors_M | 0.88 | 0.02
    Head&Neck | PharyngealConstrictors_L | 0.87 | 0.03
    Head&Neck | Pituitary | 0.74 | 0.14
    Head&Neck | SpinalCord | 0.85 | 0.04
    Head&Neck | Submandibular_L | 0.86 | 0.04
    Head&Neck | Submandibular_R | 0.87 | 0.03
    Head&Neck | TemporalLobe_L | 0.89 | 0.03
    Head&Neck | TemporalLobe_R | 0.89 | 0.03
    Head&Neck | Thyroid | 0.86 | 0.03
    Head&Neck | TMJ_L | 0.79 | 0.06
    Head&Neck | TMJ_R | 0.74 | 0.06
    Head&Neck | Trachea | 0.90 | 0.02
    Thorax | Esophagus | 0.80 | 0.05
    Thorax | Heart | 0.98 | 0.01
    Thorax | Lung_L | 0.99 | 0.00
    Thorax | Lung_R | 0.99 | 0.00
    Thorax | Spinal Cord | 0.97 | 0.02
    Thorax | Trachea | 0.95 | 0.02
    Abdomen | Duodenum | 0.64 | 0.05
    Abdomen | Kidney_L | 0.96 | 0.02
    Abdomen | Kidney_R | 0.97 | 0.01
    Abdomen | Liver | 0.95 | 0.02
    Abdomen | Pancreas | 0.79 | 0.04
    Abdomen | SpinalCord | 0.82 | 0.02
    Abdomen | Stomach | 0.89 | 0.02
    Pelvic-Man | Bladder | 0.92 | 0.03
    Pelvic-Man | BowelBag | 0.89 | 0.04
    Pelvic-Man | FemurHead_L | 0.96 | 0.02
    Pelvic-Man | FemurHead_R | 0.95 | 0.02
    Pelvic-Man | Marrow | 0.90 | 0.02
    Pelvic-Man | Prostate | 0.85 | 0.04
    Pelvic-Man | Rectum | 0.88 | 0.03
    Pelvic-Man | SeminalVesicle | 0.72 | 0.07
    Pelvic-Female | Bladder | 0.88 | 0.02
    Pelvic-Female | BowelBag | 0.87 | 0.02
    Pelvic-Female | FemurHead_L | 0.96 | 0.02
    Pelvic-Female | FemurHead_R | 0.95 | 0.02
    Pelvic-Female | Marrow | 0.89 | 0.02
    Pelvic-Female | Rectum | 0.77 | 0.04

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: 187 image sets (CT structure models).
    • Data Provenance: The testing image source is from the United States. The data is retrospective, as it consists of existing CT datasets.
      • Patient demographics: 57% male, 43% female. Ages: 21-30 (0.3%), 31-50 (31%), 51-70 (51.3%), 71-100 (14.4%). Race: 78% White, 12% Black or African American, 10% Other.
      • Anatomical regions: Head and Neck (20.3%), Esophageal and Lung (Thorax, 20.3%), Gastrointestinal (Abdomen, 20.3%), Prostate (Male Pelvis, 20.3%), Female Pelvis (18.7%).
      • Scanner models: GE (28.3%), Philips (33.7%), Siemens (38%).
      • Slice thicknesses: 1mm (5.3%), 2mm (28.3%), 2.5mm (2.7%), 3mm (23%), 5mm (40.6%).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of experts: Six
    • Qualifications of experts: Clinically experienced radiation therapy physicists.

    4. Adjudication method for the test set

    • Adjudication method: Consensus. The ground truth was "generated manually using consensus RTOG guidelines as appropriate by six clinically experienced radiation therapy physicists." This implies that the experts agreed upon the ground truth for each case.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance

    • MRMC Study: No, a multi-reader, multi-case comparative effectiveness study was not performed to assess human reader improvement with AI assistance. The study focused on the standalone performance of the AI algorithm (automatic contouring) and its comparison to a reference device.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Standalone Performance: Yes, a standalone performance evaluation of the automatic segmentation algorithm was performed. The reported Mean DSC values are for the MOZI TPS device's auto-segmentation function without direct human-in-the-loop interaction during the segmentation process. The comparison to the reference device AccuContour™ (K191928) was also a standalone comparison.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Type of Ground Truth: Expert consensus. The ground truth was "generated manually using consensus RTOG guidelines as appropriate by six clinically experienced radiation therapy physicists."

    8. The sample size for the training set

    • Training Set Sample Size: 560 image sets (CT structure models).

    9. How the ground truth for the training set was established

    • The document states that the training image set source is from China. It does not explicitly detail the method for establishing ground truth for the training set. However, given that the ground truth for the test set was established by "clinically experienced radiation therapy physicists" using "consensus RTOG guidelines," it is highly probable that a similar methodology involving expert delineation and review was used for the training data to ensure high-quality labels for the deep learning model. The statement that "They are independent of each other" (training and testing sets) implies distinct data collection and ground truth establishment processes, but the specific details for the training set are not provided.

    K Number
    K221706
    Device Name
    AccuContour
    Date Cleared
    2023-03-09

    (269 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    It is used by radiation oncology department to register multi-modality images and segment (non-contrast) CT images, to generate needed information for treatment planning, treatment evaluation and treatment adaptation.

    Device Description

    The proposed device, AccuContour, is a standalone software which is used by radiation oncology department to register multi-modality images and segment (non-contrast) CT images, to generate needed information for treatment planning, treatment evaluation and treatment adaptation.

    The product has three image processing functions:

    • (1) Deep learning contouring: it can automatically contour organs-at-risk, in head and neck, thorax, abdomen and pelvis (for both male and female) areas,
    • (2) Automatic registration: rigid and deformable registration, and
    • (3) Manual contouring.

    It also has the following general functions:

    • Receive, add/edit/delete, transmit, input/export, medical images and DICOM data;
    • Patient management;
    • Review of processed images;
    • Extension tool;
    • Plan evaluation and plan comparison;
    • Dose analysis.
    AI/ML Overview

    This document (K221706) is a 510(k) Premarket Notification for the AccuContour device by Manteia Technologies Co., Ltd. It declares substantial equivalence to a predicate device and several reference devices. The focus here is on the performance data related to the "Deep learning contouring" feature and the "Automatic registration" feature.

    Based on the provided document, here's a detailed breakdown of the acceptance criteria and the study proving the device meets them:

    I. Acceptance Criteria and Reported Device Performance

    The document does not explicitly provide a clear table of acceptance criteria and the reported device performance for the deep learning contouring in the format requested. Instead, it states that "Software verification and regression testing have been performed successfully to meet their previously determined acceptance criteria as stated in the test plans." This implies that internal acceptance criteria were met, but these specific criteria and the detailed performance results (e.g., dice scores, Hausdorff distance for contours) are not disclosed in this summary.

    However, for the deformable registration, it provides a comparative statement:

    Feature | Acceptance Criteria (Implied) | Reported Device Performance
    Deformable Registration | Non-inferiority to reference device (K182624) based on Normalized Mutual Information (NMI) | The NMI value of the proposed device was non-inferior to that of the reference device.

    It's important to note:

    • For Deep Learning Contouring: No specific performance metrics or acceptance criteria are listed in this 510(k) summary. The summary only broadly mentions that the software "can automatically contour organs-at-risk, in head and neck, thorax, abdomen and pelvis (for both male and female) areas." The success is implicitly covered by the "Software verification and validation testing" section.
    • For Automatic Registration: The criterion is non-inferiority in NMI compared to a reference device. The specific NMI values are not provided, only the conclusion of non-inferiority.
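
    Since the registration comparison above rests on Normalized Mutual Information, a brief sketch may help: NMI is estimated from the marginal and joint intensity entropies of the fixed image and the resampled moving image. The code below uses the common (H(A) + H(B)) / H(A, B) formulation; it is illustrative only and not the implementation used in the submission.

        import numpy as np

        def normalized_mutual_information(fixed, moving, bins=64):
            # NMI = (H(A) + H(B)) / H(A, B), estimated from a joint intensity histogram.
            # fixed, moving : intensity arrays of the same shape (moving image resampled
            #                 onto the fixed image grid after registration)
            joint_hist, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
            pxy = joint_hist / joint_hist.sum()
            px = pxy.sum(axis=1)
            py = pxy.sum(axis=0)
            # Entropies, skipping empty bins to avoid log(0).
            h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
            h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
            h_xy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
            return (h_x + h_y) / h_xy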

    II. Sample Size and Data Provenance

    • Test Set (for Deformable Registration):
      • Sample Size: Not explicitly stated as a number, but described as "multi-modality image sets from different patients."
      • Data Provenance: "All fixed images and moving images are generated in healthcare institutions in U.S." This indicates U.S. origin; the document does not state whether the data were collected retrospectively or prospectively.
    • Training Set (for Deep Learning Contouring):
      • Sample Size: Not explicitly stated in the provided document.
      • Data Provenance: Not explicitly stated in the provided document.

    III. Number of Experts and Qualifications for Ground Truth

    • For the Test Set (Deformable Registration): The document does not mention the use of experts or ground truth establishment for the deformable registration test beyond the use of NMI for "evaluation." NMI is an image similarity metric and does not typically require human expert adjudication of registration quality in the same way contouring might.
    • For the Training Set (Deep Learning Contouring): The document does not specify the number of experts or their qualifications for establishing ground truth for the training set.

    IV. Adjudication Method for the Test Set

    • For Deformable Registration: Not applicable in the traditional sense, as NMI is an objective quantitative metric. There's no mention of human adjudication for registration quality here.
    • For Deep Learning Contouring (Test Set): The document notes there was no clinical study included in this submission. This implies that if a test set for the deep learning contouring was used, its ground truth (and any adjudication process for it) is not described in this 510(k) summary. Given the absence of a clinical study, it's highly probable that ground truth for performance evaluation of deep learning contouring was established internally through expert consensus or other methods, but details are not provided.

    V. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was it done?: No, an MRMC comparative effectiveness study was not reported. The document explicitly states: "No clinical study is included in this submission."
    • Effect Size: Not applicable, as no such study was performed or reported.

    VI. Standalone (Algorithm Only) Performance Study

    • Was it done?: Yes, for the deformable registration feature. The NMI evaluation was "on two sets of images for both the proposed device and reference device (K182624), respectively." This is an algorithm-only (standalone) comparison.
    • For Deep Learning Contouring: While the deep learning contouring is a standalone feature, the document does not provide details of its standalone performance evaluation (e.g., against expert ground truth). It only states that software verification and validation were performed to meet acceptance criteria.

    VII. Type of Ground Truth Used

    • Deformable Registration: The "ground truth" for the deformable registration evaluation was implicitly the images themselves, with NMI being used as a metric to compare the alignment achieved by the proposed device versus the reference device. It's an internal consistency/similarity metric rather than a "gold standard" truth established by external means like pathology or expert consensus.
    • Deep Learning Contouring: Not explicitly stated in the provided document. Given that it's an AI-based contouring tool and no clinical study was performed, the ground truth for training and internal testing would typically be established by expert consensus (e.g., radiologist or radiation oncologist contours) or pathology, but the document does not specify.

    VIII. Sample Size for the Training Set

    • Not explicitly stated in the provided document for either the deep learning contouring or the automatic registration.

    IX. How Ground Truth for the Training Set was Established

    • Not explicitly stated in the provided document for either the deep learning contouring or the automatic registration. For deep learning, expert-annotated images are the typical method, but details are absent here.
