510(k) Data Aggregation

    K Number: K220582
    Manufacturer:
    Date Cleared: 2022-08-22 (174 days)
    Product Code:
    Regulation Number: 892.5050
    Reference & Predicate Devices
    Reference Devices: K193381, K201798

    Intended Use

    ClearCalc is intended to assist radiation treatment planners in determining if their treatment planning calculations are accurate using an independent Monitor Unit (MU) and dose calculation algorithm.

    Device Description

    The ClearCalc Model RADCA V2 device is software that uses treatment data, image data, and structure set data obtained from supported Treatment Planning Systems (TPS) and Application Programming Interfaces (APIs) to perform a dose and/or monitor unit (MU) calculation on the incoming treatment planning parameters. It is designed to assist radiation treatment planners in determining whether their treatment planning calculations are accurate using an independent Monitor Unit (MU) and dose calculation algorithm.
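
    The summary does not describe how ClearCalc compares its independent result against the planning system's value, but the general idea of a secondary-check tool can be illustrated with a minimal, hypothetical sketch: recompute the dose/MU independently and flag any value whose percent difference from the TPS exceeds a tolerance. The function name and the 3% tolerance below are illustrative assumptions, not details taken from the 510(k).

```python
# Hypothetical sketch of a secondary dose/MU check (illustrative names and
# tolerance; not ClearCalc's actual implementation or acceptance criteria).

def secondary_check(tps_value: float, independent_value: float,
                    tolerance_pct: float = 3.0) -> dict:
    """Compare a TPS dose/MU value against an independently calculated one."""
    if tps_value == 0:
        raise ValueError("TPS value must be non-zero for a percent comparison.")
    diff_pct = 100.0 * (independent_value - tps_value) / tps_value
    return {"difference_pct": diff_pct, "passed": abs(diff_pct) <= tolerance_pct}

# Example: the TPS reports 250 MU; the independent algorithm computes 254 MU.
print(secondary_check(250.0, 254.0))  # {'difference_pct': 1.6, 'passed': True}
```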

    AI/ML Overview

    The provided text describes ClearCalc Model RADCA V2 and its substantial equivalence to predicate devices. However, the document does NOT contain a detailed study proving the device meets specific acceptance criteria in the manner of a clinical trial or a performance study with detailed statistical results. Instead, it states that "Verification tests were performed to ensure that the software works as intended and pass/fail criteria were used to verify requirements. Validation testing was performed to ensure that the software was behaving as intended, and results from ClearCalc were validated against accepted results for known planning parameters from clinically-utilized treatment planning systems."

    Therefore, I can provide the acceptance criteria as stated for the ClearCalc Model RADCA V2's primary dose calculation algorithms and its Monte Carlo calculations, but comprehensive study details such as sample size, data provenance, expert ground truth, adjudication methods, or separate training/test sets are not available in the provided text.

    Here's a summary of the available information:

    1. Table of Acceptance Criteria and Reported Device Performance

    Calculation Algorithm | Acceptance Criteria | Reported Device Performance
    FSPB (Photon Plans) | Passing criteria consistent with Predicate Device ClearCalc Model RADCA | Passed in all test cases
    TG-71 (Electron Plans) | Passing criteria consistent with Predicate Device ClearCalc Model RADCA | Passed in all test cases
    TG-43 (Brachytherapy Plans) | Passing criteria consistent with Predicate Device ClearCalc Model RADCA | Passed in all test cases
    Monte Carlo Calculations | Gamma analysis passing rate of >93% with ±3% relative dose agreement and 3 mm Distance To Agreement (DTA) | Passed in all test cases
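
    The Monte Carlo acceptance criterion in the table is a standard gamma analysis (a global ±3%/3 mm criterion with a >93% passing rate). As an illustration of the metric only, and not of ClearCalc's implementation (which would operate on 2D/3D dose grids with interpolation and low-dose thresholds), here is a minimal 1D gamma passing-rate sketch:

```python
# Minimal 1D gamma passing-rate sketch (global normalization); illustrative only.
import numpy as np

def gamma_passing_rate(ref_dose, eval_dose, positions_mm,
                       dose_tol_pct=3.0, dta_mm=3.0):
    """Fraction of reference points with gamma index <= 1."""
    ref_dose = np.asarray(ref_dose, dtype=float)
    eval_dose = np.asarray(eval_dose, dtype=float)
    positions_mm = np.asarray(positions_mm, dtype=float)

    dose_tol = dose_tol_pct / 100.0 * ref_dose.max()  # global dose criterion
    gammas = np.empty_like(ref_dose)
    for i, (r_pos, r_dose) in enumerate(zip(positions_mm, ref_dose)):
        dose_term = (eval_dose - r_dose) / dose_tol    # dose difference term
        dist_term = (positions_mm - r_pos) / dta_mm    # distance-to-agreement term
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return float(np.mean(gammas <= 1.0))

# Toy example: evaluated profile scaled by 1% relative to the reference.
x = np.arange(0.0, 50.0, 1.0)                    # positions in mm
ref = 100.0 * np.exp(-((x - 25.0) / 10.0) ** 2)  # reference dose profile
ev = 1.01 * ref                                  # evaluated dose profile
print(f"Gamma passing rate: {gamma_passing_rate(ref, ev, x):.1%}")
```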

    2. Sample size used for the test set and the data provenance:

    • Sample Size: Not explicitly stated. The text mentions "all test cases" without quantifying the number of cases.
    • Data Provenance: Not explicitly stated. The text refers to "known planning parameters from clinically-utilized treatment planning systems," suggesting the data are representative of clinical use, but it does not specify the country of origin or whether the data are retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: Not specified.
    • Qualifications of Experts: The ground truth was established from "accepted results for known planning parameters from clinically-utilized treatment planning systems." This implies that the ground truth is derived from established clinical practices and systems, which are typically validated by qualified medical physicists and radiation oncologists, but specific expert involvement in this validation is not detailed.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • Adjudication Method: Not specified. The validation relies on comparison to "accepted results."

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without:

    • MRMC Study: No. The device is a "Secondary Check Quality Assurance Software" designed to assist radiation treatment planners by providing an independent calculation. It does not involve human readers making diagnoses or interpretations that would be augmented by AI in an MRMC study context.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:

    • Standalone Performance: Yes, the described verification and validation tests assess the algorithm's performance against "accepted results" from clinical systems, which is a standalone evaluation.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Ground Truth Type: "Accepted results for known planning parameters from clinically-utilized treatment planning systems." This implies a form of established, clinically validated ground truth derived from the outputs of the clinically-utilized treatment planning systems that the device is designed to check.

    8. The sample size for the training set:

    • Training Set Sample Size: Not specified. The text focuses on verification and validation testing, not on the training of the underlying algorithms.

    9. How the ground truth for the training set was established:

    • Training Set Ground Truth: Not specified. As the document focuses on validation rather than algorithm training, this information is not provided.