
510(k) Data Aggregation

    K Number
    K233236
    Device Name
    Radiance V5
    Date Cleared
    2024-05-17

    (232 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    Intended Use

    Radiance V5 is a software system intended for treatment planning and analysis of radiation therapy administered with devices suitable for intraoperative radiotherapy.

    The treatment plans provide treatment unit set-up parameters and estimates of dose distributions expected during the proposed treatment, and may be used to administer treatments after review and approval by the intended user.

    The system functionality can be configured based on user needs.

    The intended users of Radiance V5 shall be clinically qualified radiation therapy staff trained in using the system.

    Device Description

    Radiance V5 is a treatment simulation tool for intraoperative radiotherapy (IORT), featuring faster Monte-Carlo simulation and multi-modal planning based on modern imaging standards. It is a software program for the planning and analysis of radiation therapy plans. Typically, a treatment plan is created by importing patient images obtained from a CT scanner, defining regions of interest, deciding on a treatment setup and objectives, optimizing the treatment parameters, comparing alternative plans to find the best compromise, computing the clinical dose distribution, approving the plan, and exporting it.
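    The planning workflow described above can be sketched as a simple pipeline. This is a hypothetical illustration only; the class and step names below are assumptions and do not reflect Radiance V5's actual software design:

    ```python
    from dataclasses import dataclass, field

    # Hypothetical sketch of the IORT planning workflow described above.
    # Step names mirror the text; none of these types come from Radiance V5.

    @dataclass
    class TreatmentPlan:
        images: str                                  # imported CT series
        rois: list = field(default_factory=list)     # regions of interest
        setup: dict = field(default_factory=dict)    # treatment setup parameters
        dose: list = field(default_factory=list)     # computed dose distribution
        approved: bool = False

    def create_plan(ct_series: str) -> TreatmentPlan:
        plan = TreatmentPlan(images=ct_series)               # 1. import patient images
        plan.rois = ["target", "organ_at_risk"]              # 2. define regions of interest
        plan.setup = {"applicator_mm": 40, "energy_MeV": 6}  # 3. decide setup and objectives
        plan.dose = [0.0, 0.0, 0.0]                          # 4. compute dose (stubbed out)
        return plan

    def approve_and_export(plan: TreatmentPlan) -> str:
        plan.approved = True                                 # 5. review and approval by user
        return f"exported plan with {len(plan.rois)} ROIs"   # 6. export for treatment
    ```

    The per-step structure is what matters here: the 510(k) summary argues the V5 workflow is unchanged from its predicates, which is the basis for its substantial-equivalence claim.
    
    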

    AI/ML Overview

    The provided text, a 510(k) summary for the Radiance V5 device, does not contain detailed information about the specific acceptance criteria and the study proving the device meets those criteria, particularly in the context of performance metrics like accuracy, sensitivity, or specificity.

    Instead, the document focuses on:

    • Substantial Equivalence: Arguing that Radiance V5 is substantially equivalent to its predicate device (Radiance V4, K171885).
    • Technological Characteristics: Highlighting similarities in functionality and workflow, and noting improvements like faster Monte-Carlo simulation (GPU-based Hybrid Monte Carlo) and a redesigned UI.
    • Non-Clinical Data: Stating that validation and verification testing indicated the device meets predefined product requirements and standards (IEC 61217, IEC 62083, IEC 62304, IEC 62366).
    • Clinical Data (Limited): Referencing a clinical study performed for an even earlier predecessor (Radiance, K112060) to evaluate effectiveness and repeatability of the planning process in IORT. It then claims this data is safely extrapolatable to V5 because the changes in V5 do not modify basic functionality or workflow.
    • Software V&V: Stating conformance with IEC 62304 and FDA guidance for software.

    Crucially, the document does not provide a table of acceptance criteria with corresponding performance results for Radiance V5 itself, nor does it describe a study specifically designed to prove Radiance V5 meets quantitative performance metrics like those typically required for AI/ML-based diagnostic or treatment optimization tools (e.g., AUC, sensitivity, specificity, or error rates compared to ground truth).

    The "Performance Data" section vaguely mentions "predefined products requirements" and "validation and verification testing" for non-clinical data, but does not elaborate on what these requirements or results were.

    Given the information provided in the document:

    1. A table of acceptance criteria and the reported device performance:

    • Acceptance Criteria: Not explicitly detailed in quantitative, verifiable metrics (e.g., specific thresholds for dose accuracy, planning time, or UI usability scores). The document mentions meeting "predefined product requirements" and requirements from standards like IEC 61217, IEC 62083, IEC 62304, and IEC 62366. These standards typically cover aspects like safety, software lifecycle processes, usability engineering, and equipment-specific requirements, but do not necessarily define specific performance thresholds for a treatment planning algorithm's output accuracy against a clinical ground truth.
    • Reported Device Performance: The document highlights improvements in the "speed of the Hybrid Monte Carlo calculations" as a key performance enhancement over previous versions, making "less precise calculations unnecessary." It also states that the validation and verification testing "carried out on Radiance V5 indicates that it meets its predefined products requirements." No specific numerical performance data (e.g., accuracy, precision) are reported for Radiance V5's core treatment planning capabilities.

    2. Sample size used for the test set and the data provenance:

    • Test Set Sample Size: Not specified for Radiance V5. The document only references a clinical study for a predecessor (Radiance, K112060), but does not provide details about its sample size, data provenance (e.g., country of origin), or whether it was retrospective or prospective. It asserts that this prior study's data "can be safely extrapolated and is also valid for Radiance V5."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not specified. The document does not describe the establishment of a ground truth for performance testing of Radiance V5 or its direct predecessor (V4). The "clinical study" mentioned for the even earlier Radiance (K112060) is vaguely described as evaluating "effectiveness and repeatability of the planning process," but details on ground truth establishment from experts are absent.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Not specified. This level of detail about test set design is not present in the document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • No MRMC study is detailed for Radiance V5. The document focuses on the device as a standalone treatment planning software that assists clinicians, rather than an AI that augments human interpretation in a diagnostic context requiring MRMC studies (which are more common for AI-assisted diagnostic imaging interpretation). The "clinical study" mentioned for the predecessor evaluated the "effectiveness and repeatability of the planning process," not human reading improvement.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • The document implies that non-clinical validation and verification testing was done on the software itself to ensure it met "predefined product requirements" and standards. This would effectively be a standalone performance evaluation against defined specifications, but the specifics of what was measured and how (e.g., dose calculation accuracy compared to a gold standard physics model) are not provided. The comparison of Monte Carlo calculations in V5 to less precise calculations in prior versions suggests an internal performance metric, but not a direct standalone clinical performance evaluation against a human or true outcome.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Not specified. For a treatment planning system, ground truth would typically relate to the accuracy of dose calculations compared to a physics model or measurements, or the clinical efficacy and safety of the plans generated. The document only refers to the predecessor's study evaluating "effectiveness and repeatability of the planning process" and "current uncertainties in regard to (manual) treatment planning," which suggests a comparison to manual methods but doesn't define a specific ground truth.

    8. The sample size for the training set:

    • Not applicable as described. Radiance V5 is described as a "software program for planning and analysis of radiation therapy plans" with a "three dimensional dosimetry engine" and "Hybrid Monte Carlo dose computations." It's not presented as an AI/ML model that would typically have a distinct "training set" of patient data in the modern supervised learning sense. While it's software that performs complex calculations, the term "AI" is not explicitly used, and its "engine" implies a deterministic algorithm rather than a learned model requiring a training dataset.
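    The "Hybrid Monte Carlo" engine referenced above is a stochastic physics simulation, not a learned model. A toy sketch of the Monte-Carlo principle behind such dose engines follows; this is purely illustrative and makes strong simplifying assumptions (a homogeneous slab, local energy deposition, no scatter or electron transport), none of which reflect Radiance's actual implementation:

    ```python
    import random

    def mc_depth_dose(n_photons=50_000, mu=0.2, depth_cm=10.0, bins=20, seed=1):
        """Toy Monte-Carlo depth-dose estimate in a homogeneous slab.

        Each photon's first-interaction depth is sampled from the exponential
        attenuation law p(z) = mu * exp(-mu * z), and the photon's energy is
        deposited locally. This illustrates only the statistical principle of
        Monte-Carlo dose engines; a clinical engine also models scattering,
        secondary-electron transport, tissue heterogeneity, and uses variance
        reduction and GPU acceleration for speed.
        """
        rng = random.Random(seed)
        dose = [0.0] * bins
        bin_width = depth_cm / bins
        for _ in range(n_photons):
            z = rng.expovariate(mu)          # sampled interaction depth (cm)
            if z < depth_cm:
                dose[int(z / bin_width)] += 1.0
        total = sum(dose) or 1.0
        return [d / total for d in dose]     # normalized dose per depth bin
    ```

    Because the output is a statistical estimate, its precision improves with the number of sampled particle histories, which is why the summary emphasizes calculation speed: faster sampling allows more histories, and thus lower statistical noise, in clinically acceptable planning times.
    
    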

    9. How the ground truth for the training set was established:

    • Not applicable. As noted above, the device is not described as utilizing a training set in the AI/ML context. Its enhancements ("faster Monte-Carlo simulation," "redesigned user interface workflow," "multi-modal multi-image planning") appear to be engineering and algorithmic improvements rather than AI model training on a dataset with established ground truth.