
510(k) Data Aggregation

    K Number
    K082606
    Date Cleared
    2008-11-10

    (63 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    Predicate For
    Intended Use

    CDMS is a Microsoft Windows-based software application designed to record and manage physics data acquired during acceptance testing, commissioning, and calibration of radiation therapy treatment devices. In addition, CDMS uses the same physics data to allow users to perform MU calculations based on treatment field parameters that are either imported from the treatment planning system or entered manually. CDMS is also used to manage linac calibration using standard protocols.
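As a rough illustration of the kind of MU calculation described above (a sketch, not CDMS's actual implementation), a Khan-style formula divides the prescribed dose by the product of the machine's reference output and the relevant field-specific factors. All parameter names and numeric values below are hypothetical:

```python
def monitor_units(prescribed_dose_cgy, ref_output_cgy_per_mu,
                  sc, sp, tmr, wedge_factor=1.0, tray_factor=1.0,
                  inverse_square=1.0):
    """Simplified Khan-style MU calculation:
    MU = D / (k * Sc * Sp * TMR * WF * TF * InvSq),
    where k is the reference dose rate (cGy/MU), Sc and Sp are the
    collimator and phantom scatter factors, and TMR is the
    tissue-maximum ratio at the calculation depth."""
    dose_per_mu = (ref_output_cgy_per_mu * sc * sp * tmr
                   * wedge_factor * tray_factor * inverse_square)
    return prescribed_dose_cgy / dose_per_mu

# Hypothetical field: 200 cGy prescribed, 1 cGy/MU reference output,
# Sc = 0.99, Sp = 1.00, TMR = 0.80 -> about 252.5 MU.
mu = monitor_units(200.0, 1.0, sc=0.99, sp=1.0, tmr=0.80)
```

In practice the factor values come from the commissioning beam data that CDMS records, which is why data management and MU calculation sit in the same tool.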

    Device Description

    CDMS is a software program designed to record radiation beam data acquired during the commissioning, acceptance testing and calibration of a radiation therapy treatment device.

    AI/ML Overview

    The provided text describes the D3 Radiation Planning's CDMS (Commissioning Data Management System). Here's an analysis of the acceptance criteria and study information:

    Acceptance Criteria and Device Performance

    The document does not explicitly present a table of acceptance criteria with numerical targets and the reported device performance against those targets. Instead, it relies on a "side by side comparison" of monitor unit/dose calculations and linear accelerator calibration against established protocols (AAPM Task Groups 51 and 40) as evidence of substantial equivalence to the predicate devices (RadCalc V4.0 and IMSure).

    The non-clinical tests involved:

    • Importing measured physics data.
    • Performing numerous monitor unit/dose calculations.
    • Calibrating a linear accelerator according to TG-51.
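The TG-51 calibration step listed above can be sketched in simplified form: the protocol derives absorbed dose to water from a fully corrected ion chamber reading, the chamber's calibration coefficient N_D,w, and the beam-quality conversion factor kQ. The function and numeric values below are illustrative assumptions, not the protocol's full procedure (for example, the gradient correction P_gr for cylindrical chambers is omitted):

```python
def dose_per_mu_tg51(m_raw_nc, p_ion, p_pol, p_elec, p_tp,
                     n_dw_cgy_per_nc, k_q, mu_delivered):
    """Simplified TG-51-style dose determination:
    D_w = M * kQ * N_D,w, with the fully corrected reading
    M = M_raw * P_ion * P_pol * P_elec * P_TP."""
    m_corrected = m_raw_nc * p_ion * p_pol * p_elec * p_tp
    dose_cgy = m_corrected * k_q * n_dw_cgy_per_nc
    return dose_cgy / mu_delivered

# Hypothetical 6 MV beam reading; a typical goal is an output
# within about 1% of 1 cGy/MU at the reference depth.
output = dose_per_mu_tg51(18.5, 1.003, 1.001, 1.000, 1.010,
                          5.38, 0.992, 100.0)
```

A tool like CDMS would store the correction factors and chamber coefficients alongside the readings so that the calibration is reproducible and auditable.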

    The conclusion states that "Side by side comparison tables are shown in the supporting Validation & Verification documentation," implying that the device's calculations and data management capabilities were found to align with the expected performance as defined by these existing radiation therapy physics standards and predicate devices.
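A side-by-side comparison of this kind typically reduces to a percent-difference check against a tolerance. The 2% default below is an illustrative assumption, not a value stated in the submission:

```python
def percent_difference(calculated, reference):
    """Signed percent difference of a calculated value from a reference."""
    return 100.0 * (calculated - reference) / reference

def within_tolerance(calculated, reference, tol_percent=2.0):
    """True if the calculated value agrees with the reference
    to within tol_percent (absolute percent difference)."""
    return abs(percent_difference(calculated, reference)) <= tol_percent

# e.g., a calculated 101.5 MU vs. a 100.0 MU reference passes a 2% check,
# while 103.0 MU would fail it.
```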

    Implied Acceptance Criteria (based on comparison to predicate devices and standards):

    • Accurate monitor unit/dose calculations (comparable to predicates and established physics data): "Performed numerous monitor unit/dose calculations"; side-by-side comparison tables appear in the supporting Validation & Verification documentation.
    • Accurate linear accelerator calibration management (per AAPM TG-51 and TG-40): "Calibrated a linear accelerator according to TG-51."
    • Effective management of physics data (recording, storage, accessibility): "Allowed for proper storage of calibration parameters as well as better management of calibration reports."
    • Substantial equivalence to predicate devices (RadCalc V4.0 and IMSure): Concluded to be substantially equivalent based on intended use, technological characteristics, and non-clinical testing.

    Study Information

    Due to the nature of the device (software for data management and calculation, not directly involved in patient treatment delivery), the study conducted was non-clinical.

    1. Sample Size used for the test set and the data provenance:

      • Test Set Sample Size: The document does not specify a numerical sample size for the "numerous monitor unit/dose calculations" or the calibration tests. It refers to "measured physics data" being imported, suggesting the use of real-world or simulated clinical data.
      • Data Provenance: Not explicitly stated, but the mention of "measured physics data acquired during the commissioning, acceptance testing and calibration of a radiation therapy treatment device" implies that the data would be typical of those used in radiation oncology departments. The comparison to "peer reviewed/published or manufacturer provided measured values" suggests a mix of external and internal data sources. It is implicitly retrospective as it involves existing measured physics data.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Not specified. The ground truth appears to be based on established physics principles (Khan (Classical) algorithm), "peer reviewed/published or manufacturer provided measured values," and compliance with American Association of Physicists in Medicine (AAPM) Task Group 51 and 40 recommendations. These are considered authoritative sources in medical physics.
    3. Adjudication method for the test set:

      • Not applicable as it was not a reader study. The "adjudication" was effectively a comparison against established physical models, measured data, and existing predicate device performance (RadCalc V4.0 and IMSure).
    4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with versus without AI assistance:

      • No. An MRMC study was not conducted. This device is a software tool for physics data management and calculation, not an AI-assisted diagnostic or interpretive tool for human readers.
    5. Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done:

      • Yes, implicitly. The non-clinical tests involved CDMS (the algorithm) importing data and performing calculations. The evaluation was of the software's output against known physical parameters and established methods, hence standalone performance. Human input is for data entry and reviewing results, but the core calculation is algorithmic.
    6. The type of ground truth used:

      • Expert Consensus/Established Standards: The ground truth for calculations and calibration was based on:
        • The Khan (Classical) algorithm for MU calculation.
        • "Peer reviewed/published or manufacturer provided measured values."
        • Recommendations from AAPM Task Group 51 (for linac calibration) and AAPM Task Group 40 (for monthly QA parameters).
        • Comparison to the performance of predicate devices (RadCalc V4.0 and IMSure).
    7. The sample size for the training set:

      • This is not an AI/machine learning device that requires a training set in the conventional sense. The software uses established physics algorithms (e.g., Khan (Classical) algorithm) which are programmed based on physical laws and validated against measured data, rather than "trained" on a dataset. The "measured physics data" mentioned are used as input for calculations and management, not for training a model.
    8. How the ground truth for the training set was established:

      • Not applicable, as it's not a machine learning device. The algorithms are based on established scientific principles and formulas in radiation physics.