
510(k) Data Aggregation

    K Number: K141283
    Date Cleared: 2014-08-07 (83 days)
    Regulation Number: 892.5050
    Intended Use

    The Eclipse Treatment Planning System (Eclipse TPS) is used to plan radiotherapy treatments for patients with malignant or benign diseases. Eclipse TPS is used to plan external beam irradiation with photon, electron and proton beams, as well as for internal irradiation (brachytherapy) treatments. In addition, the Eclipse Proton Eye algorithm is specifically indicated for planning proton treatment of neoplasms of the eye.

    Device Description

    The Varian Eclipse™ Treatment Planning System (Eclipse TPS) provides software tools for planning the treatment of malignant or benign diseases with radiation. Eclipse TPS is a computer-based software device used by trained medical professionals to design and simulate radiation therapy treatments. Eclipse TPS is capable of planning treatments for external beam irradiation with photon, electron, and proton beams, as well as for internal irradiation (brachytherapy) treatments.

    AI/ML Overview

    The provided text is a 510(k) Premarket Notification for the Eclipse Treatment Planning System, a software device used for radiotherapy treatment planning. It details the device's features, comparison to predicate devices, and a summary of non-clinical testing.

    However, the document does not contain information about acceptance criteria or a study proving the device meets these criteria in the context of an AI/ML algorithm's performance (e.g., accuracy, sensitivity, specificity, or human improvement with AI assistance).

    Instead, the document focuses on demonstrating substantial equivalence to a predicate device primarily through feature comparison and general verification/validation of software functionality.

    Therefore, most of the specific questions regarding acceptance criteria, performance metrics, sample sizes for test/training sets, expert involvement, and ground truth establishment cannot be answered from the provided text.

    Here's an attempt to answer the questions based only on the information available:

    1. A table of acceptance criteria and the reported device performance

    • Acceptance Criteria: The document states that the outcome of verification and validation was that "the product conformed to the defined user needs and intended uses" and that no discrepancy reports (DRs) remained with a priority of "Safety Intolerable" or "Customer Intolerable". This acts as a high-level, qualitative acceptance criterion for the entire system's functionality and safety (a minimal sketch of such a release gate follows this list).
    • Reported Device Performance: The document concludes that "Varian therefore considers Eclipse 13.5 to be safe and to perform at least as well as the predicate device." No specific quantitative performance metrics (e.g., accuracy percentages, sensitivity/specificity, or a table of such) are provided for individual features or the system as a whole in the context of an AI/ML model's output. The performance is implied by the successful conclusion of verification and validation without critical discrepancies.
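
    To make the release criterion above concrete: the submission describes no tooling, so the following is only a minimal sketch of how a "no intolerable DRs remaining" gate might be expressed in code. The DiscrepancyReport structure, field names, and priority strings are assumptions modeled on the summary's wording, not Varian's actual process.

```python
from dataclasses import dataclass

# Priority labels echo the 510(k) summary's wording; the data model
# itself is hypothetical.
BLOCKING_PRIORITIES = {"Safety Intolerable", "Customer Intolerable"}

@dataclass
class DiscrepancyReport:
    dr_id: str
    priority: str
    resolved: bool  # True once the DR has been closed out

def release_gate_passes(drs: list[DiscrepancyReport]) -> bool:
    """Pass only if no unresolved DR carries a blocking priority."""
    return not any(
        not dr.resolved and dr.priority in BLOCKING_PRIORITIES for dr in drs
    )
```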

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • The document mentions that "Verification and Validation were performed for all the new features and regression testing was performed against the existing features of Eclipse." However, it does not specify the sample size of any test set (e.g., number of patient cases or treatment plans) used for this testing; a generic sketch of such regression testing follows this list.
    • Data Provenance: Not mentioned. It's a software system for planning, so "data" might refer to simulated or clinical patient data, but its origin or nature (retrospective/prospective) is not detailed.
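
    For illustration only: regression testing of the kind described is commonly automated by comparing a new build's outputs against stored baselines from the prior release. The sketch below assumes JSON baseline files of scalar plan metrics and a numerical tolerance; none of these details come from the submission.

```python
import json
import math

REL_TOL = 1e-6  # assumed numerical tolerance; not stated in the submission

def load_plan_metrics(path: str) -> dict[str, float]:
    """Load scalar plan metrics (e.g., point doses, monitor units) from JSON."""
    with open(path) as f:
        return json.load(f)

def regression_failures(baseline_path: str, candidate_path: str) -> list[str]:
    """Names of metrics where the new build deviates from the stored baseline."""
    baseline = load_plan_metrics(baseline_path)
    candidate = load_plan_metrics(candidate_path)
    return [
        name
        for name, expected in baseline.items()
        if name not in candidate
        or not math.isclose(candidate[name], expected, rel_tol=REL_TOL)
    ]
```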

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Not mentioned. The testing described is software verification and validation, not a clinical performance study involving expert image interpretation or similar. The "ground truth" for software testing would typically be based on expected software behavior, calculations, and adherence to specifications rather than expert consensus on medical images.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Not mentioned. This concept is more relevant to studies establishing ground truth for diagnostic or prognostic AI models, which is not the primary focus of this submission.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    • No, an MRMC study was NOT done (or at least not reported in this summary). The document details changes to a treatment planning system, including "RapidPlan (previously known as Dose Volume Histogram Estimation)" and "support for mARC treatment planning by Siemens treatment machines." While RapidPlan involves a type of estimation, the document does not present it as a diagnostic AI requiring human reading assistance with an associated MRMC study.

    6. If a standalone study (i.e. algorithm-only performance, without a human in the loop) was done

    • The document describes "Non-clinical Testing," including "Verification and Validation," and states that "The outcome was that the product conformed to the defined user needs and intended uses and that there were no DRs (discrepancy reports) remaining which had a priority of Safety Intolerable or Customer Intolerable." This implies substantial standalone testing of the software's functionality and calculations. However, no specific quantitative performance metrics (e.g., accuracy or precision of algorithms such as AcurosXB or RapidPlan) are presented for independent assessment.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • For a treatment planning system, "ground truth" for testing primarily involves:
      • Mathematical/Physical Models: Comparing calculated dose distributions and parameters against established physics principles and validated models (a simple sketch of such a check follows this answer).
      • Predicate Device Comparison: Ensuring new features produce results consistent with or improved over previously validated predicate devices.
      • Software Requirements/Specifications: Ensuring all features function according to their defined specifications.
    • The document indicates that "System requirements created or affected by the changes can be traced to the test outcomes," suggesting that meeting the requirements served as a form of ground truth. It also notes "regression testing was performed against the existing features of Eclipse," implying comparison to the established behavior of the predicate. No mention of expert consensus on medical image ground truth, pathology, or outcomes data is made.
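
    To make the "Mathematical/Physical Models" bullet concrete: one simple check compares a calculated dose distribution voxel-by-voxel against a reference (a measurement or an independently validated calculation). Real commissioning typically uses the more involved gamma analysis (e.g., 3%/3 mm criteria); the tolerance and function below are illustrative assumptions, not Varian's test protocol.

```python
import numpy as np

DOSE_TOL = 0.03  # assumed 3% tolerance, relative to the reference maximum

def dose_agreement(calculated: np.ndarray, reference: np.ndarray) -> float:
    """Fraction of voxels whose calculated dose lies within DOSE_TOL of the
    reference, normalized to the reference maximum dose."""
    diff = np.abs(calculated - reference) / reference.max()
    return float(np.mean(diff <= DOSE_TOL))
```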

    8. The sample size for the training set

    • Not mentioned. The document describes a treatment planning system rather than a deep learning model requiring explicit training sets of medical images/data in the modern AI sense. While "RapidPlan" involves "Dose Volume Histogram Estimation," which might learn from previous plans, the specifics of its "training set" (if any) are not detailed here; a purely illustrative sketch of the general concept follows.
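
    For context only: knowledge-based planning tools of this kind are generally described as fitting a statistical model to a library of prior plans and then predicting achievable dose-volume metrics for a new patient's geometry. The submission gives no such details, so every element below (the features, the model choice, the numbers) is hypothetical.

```python
import numpy as np

# Hypothetical training library: geometric features per prior plan
# (organ volume in cc, fractional overlap with the target) and the
# mean organ dose actually achieved in that plan (Gy).
features = np.array([[45.0, 0.12], [60.0, 0.30], [52.0, 0.22], [38.0, 0.05]])
achieved_d_mean = np.array([18.5, 31.0, 26.2, 12.4])

# Least-squares linear fit: achieved dose ~ features + intercept.
X = np.hstack([features, np.ones((len(features), 1))])
coef, *_ = np.linalg.lstsq(X, achieved_d_mean, rcond=None)

def estimate_d_mean(volume_cc: float, overlap_fraction: float) -> float:
    """Predict an achievable mean dose for a new patient's geometry."""
    return float(np.array([volume_cc, overlap_fraction, 1.0]) @ coef)

print(estimate_d_mean(50.0, 0.20))  # estimate for an unseen geometry
```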

    9. How the ground truth for the training set was established

    • Not mentioned. As the existence and nature of a "training set" are not explicitly discussed, the method for establishing its ground truth is also not provided.