510(k) Data Aggregation (85 days)
Radiance V3
Radiance V3 is a software system intended for treatment planning and analysis of radiation therapy administered with devices suitable for intraoperative radiotherapy.
The treatment plans provide treatment unit set-up parameters and estimates of dose distributions expected during the proposed treatment, and may be used to administer treatments after review and approval by the intended user. The system functionality can be configured based on user needs.
The intended users of Radiance V3 shall be clinically qualified radiation therapy staff trained in using the system.
Radiance V3 is a treatment planning system, that is, a software program for planning and analysis of radiation therapy plans. Typically, a treatment plan is created by importing patient images obtained from a CT scanner, defining regions of interest either manually or semi-automatically, deciding on a treatment setup and objectives, optimizing the treatment parameters, comparing alternative plans to find the best compromise, computing the clinical dose distribution, approving the plan and exporting it.
Radiance V3 improvements over Radiance V2 (K133655) are:
- A Hybrid Monte Carlo dose computation algorithm for photons for the INTRABEAM.
- A beam modeling tool which models and verifies the treatment unit model with measurements for the INTRABEAM.
- An improved DICOM interface, including PACS query & retrieve functionality, Storage SCP, and DICOM-RT Structure and Dose export.
The compatible IOERT/IORT devices are:
- Intrabeam (a Carl Zeiss product)
- NOVAC7 and NOVAC11 (SIT products)
- LIAC10 and LIAC12 (SIT products)
- MOBETRON (an IntraOp Medical product)
- Conventional LINACs with adapted cylindrical IOERT applicators (telescopic or fixed).
Radiance V3 has been tested with Elekta/Precise and Varian/21EX LINACs with particular IOERT cylindrical telescopic applicators.
Characteristics of Radiance include:
- Image manipulation and visualization
- IORT applicator simulation
- Manual contouring and interpolation tools
- Dose calculation algorithms, including:
  - a. For INTRABEAM: Dose Painting for fast (a few seconds) interpolation of the PDD around the applicator, or Hybrid Monte Carlo for a good combination of computation time (a few minutes) and accuracy.
  - b. For IOERT: pencil beam for a fast (under one minute) dose calculation, or Monte Carlo for a good combination of computation time (1-10 minutes in most cases) and accuracy.
- Reporting
- DICOM & DICOM-RT compatibility
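The document does not describe the internals of the Hybrid Monte Carlo algorithm. As a rough illustration of the general Monte Carlo approach to photon dose computation (not Zeiss's implementation; `mu`, the voxel size, and the photon count below are arbitrary illustrative values), the sketch samples each photon's interaction depth from an exponential attenuation law and scores energy per depth voxel:

```python
import math
import random

def mc_depth_dose(n_photons=20000, mu=0.2, voxel_cm=0.5, n_voxels=20, seed=1):
    """Score relative energy deposition per depth voxel for a pencil photon beam.

    Each photon's first-interaction depth is sampled from an exponential
    distribution with linear attenuation coefficient `mu` (1/cm), and the
    photon deposits all its energy there -- a crude kerma approximation;
    production codes also transport scattered photons and secondary electrons.
    """
    rng = random.Random(seed)
    dose = [0.0] * n_voxels
    for _ in range(n_photons):
        depth = -math.log(rng.random()) / mu   # inverse-CDF sampling
        voxel = int(depth / voxel_cm)
        if voxel < n_voxels:
            dose[voxel] += 1.0
    return [d / n_photons for d in dose]

pdd = mc_depth_dose()
```

Accuracy in a real code comes from increasing the photon count and from modeling scatter and tissue heterogeneity, which is why Monte Carlo trades minutes of computation for better dose fidelity than interpolation-based methods.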
Here's an analysis of the acceptance criteria and study information for the Radiance V3 device, based on the provided text:
Important Note: The provided document is a 510(k) summary, which often focuses on demonstrating substantial equivalence to a predicate device rather than presenting extensive de novo clinical trial data. Therefore, detailed performance metrics and statistical analyses typically found in full clinical study reports are not explicitly detailed here. The information below is extracted and inferred from the available text.
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria in terms of performance metrics (e.g., specific accuracy thresholds for dose calculation). Instead, it focuses on verifying that the device meets its predefined product requirements and relevant industry standards. The performance is reported as meeting these requirements and passing various tests.
| Acceptance Criterion (Inferred from the document) | Reported Device Performance |
|---|---|
| New functionality requirements (Hybrid Monte Carlo for INTRABEAM) | Verification tests were written and executed, and the system passed these tests. The Hybrid Monte Carlo algorithm corrects the dose according to tissue density, providing a more accurate simulation of the dose. |
| Risk mitigation functions | Tests were executed to ensure risk mitigation functions as intended. The system passed these tests. |
| Continued safety and effectiveness of existing functionality (regression testing) | Regression tests were conducted to ensure continued safety and effectiveness of existing functionality. The system passed these tests. |
| Compliance with predefined product requirements | Validation and verification testing indicated that Radiance V3 meets its predefined product requirements. |
| Compliance with product standards | Validation and verification testing indicated compliance with IEC 61217 (Radiotherapy equipment: coordinates, movements and scales), IEC 62083 (Requirements for the safety of radiotherapy treatment planning systems), and IEC 62366 (Application of usability engineering to medical devices). |
| Accuracy of dose calculation functions (simulated clinical setup) | Validation testing included algorithm testing that validated the accuracy of dose calculation functions using a simulated clinical setup. No quantitative accuracy metrics (e.g., % difference from a reference) are provided, but the product was "deemed fit for clinical use." The new Hybrid Monte Carlo algorithm is explicitly stated to provide "a more accurate simulation of the dose received to the tissue" compared to the predicate's Dose Painting method. Algorithms were confirmed for a wide variety of field geometries, treatment units, setups, and patient positions. |
| Overall safety and effectiveness | Radiance V3 passed testing and was deemed safe and effective for its intended use, demonstrating substantial equivalence to the predicate device. |
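The summary does not state which metric was used to judge dose-calculation accuracy. A common acceptance metric in treatment-planning QA (offered here only as a generic illustration, not as the procedure described in the submission) is the gamma index, which combines a dose-difference criterion with a distance-to-agreement (DTA) criterion:

```python
def gamma_pass_rate(ref, evl, spacing_mm, dose_crit=0.03, dta_mm=3.0):
    """Fraction of reference points passing a 1-D gamma test (gamma <= 1).

    For each reference point, find the evaluated point minimizing the
    combined (dose difference / criterion)^2 + (distance / DTA)^2 metric.
    `dose_crit` is expressed as a fraction of the maximum reference dose.
    """
    d_max = max(ref)
    passed = 0
    for i, d_ref in enumerate(ref):
        gamma_sq = min(
            ((evl[j] - d_ref) / (dose_crit * d_max)) ** 2
            + ((j - i) * spacing_mm / dta_mm) ** 2
            for j in range(len(evl))
        )
        if gamma_sq <= 1.0:
            passed += 1
    return passed / len(ref)

# Toy 1 mm-spaced profiles: identical profiles pass everywhere; a profile
# shifted by one step still passes at most points under a 3 mm DTA criterion.
profile = [0.2, 0.5, 0.9, 1.0, 0.9, 0.5, 0.2]
shifted = [0.5, 0.9, 1.0, 0.9, 0.5, 0.2, 0.1]
```

A validation protocol would typically require a gamma pass rate above some threshold (e.g., 95%) against measured or reference dose distributions; no such threshold appears in this document.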
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document does not specify a "test set" in terms of number of cases or patients. Instead, it refers to "Over 150 tests" that were executed, including verification tests for new functionality, risk mitigation, and regression tests. These tests involved simulated clinical workflows and algorithm testing with simulated setups. It's not clear if these 150+ tests refer to individual "cases" or different functional aspects.
- Data Provenance: The testing involved "simulated clinical workflows" and "algorithm testing using a simulated clinical setup." This implies that the data was not derived from real patient studies but rather from synthetic or phantom data designed to simulate various treatment scenarios. There is therefore no country of origin for real patient data, and the evaluation used pre-designed simulated scenarios rather than prospective clinical data.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
The document does not provide information on the number of experts or their qualifications used to establish ground truth for the test set. Given that the testing involved "simulated clinical workflows" and "algorithm testing using a simulated clinical setup," the "ground truth" likely refers to established physical models, reference dose calculations (e.g., from more precise scientific methods or established phantoms), or mathematically derived correct outputs for the simulated scenarios.
4. Adjudication Method for the Test Set
The document does not specify an adjudication method. Since the ground truth for the test set seems to be derived from physical/mathematical models or established reference calculations in simulated environments, a human adjudication process by multiple experts is not directly applicable in the same way it would be for interpreting medical images. The evaluation of results against the expected simulated outcomes would be done internally by the engineering and testing teams.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No MRMC comparative effectiveness study was done. The document explicitly states: "Clinical trials were not performed as part of the development of this product." Therefore, there is no information on the effect size of human readers improving with or without AI assistance, as human reader performance was not evaluated.
6. Standalone (Algorithm Only) Performance Study
Yes, a standalone performance evaluation was done. The entire "Performance Data" section (Section 7) describes verification and validation testing performed on the Radiance V3 system itself. This included:
- Validation and Verification Testing against predefined product requirements and industry standards.
- Algorithm testing to validate the accuracy of dose calculation functions using a simulated clinical setup.
- Over 150 tests executed for new functionality, risk mitigation, and regression.
This testing addresses the performance of the algorithm and software in a standalone capacity, without direct human-in-the-loop performance measurement.
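The summary says regression tests confirmed that existing functionality remained safe and effective, but gives no detail about how they were run. The general pattern is to re-run stored cases and compare outputs against approved baselines; the sketch below is a minimal stand-in (every name, case, and tolerance is hypothetical, not part of Radiance's test suite):

```python
def run_regression(compute_dose, cases, baselines, rel_tol=0.01):
    """Re-run each stored case and compare against its baseline dose value.

    `compute_dose` stands in for any dose-calculation entry point; `cases`
    maps case names to inputs, and `baselines` maps them to previously
    approved outputs. Returns the names of cases that drifted beyond `rel_tol`.
    """
    failures = []
    for name, inputs in cases.items():
        result = compute_dose(inputs)
        if abs(result - baselines[name]) > rel_tol * abs(baselines[name]):
            failures.append(name)
    return failures

# Toy stand-in: 'dose' proportional to beam-on time in seconds.
cases = {"water_phantom": 120.0, "bone_slab": 90.0}
baselines = {"water_phantom": 24.0, "bone_slab": 18.0}
drifted = run_regression(lambda t: 0.2 * t, cases, baselines)
```

Any case appearing in `drifted` would flag a behavioral change in previously validated functionality, which is exactly what regression testing of a modified device release is meant to catch.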
7. Type of Ground Truth Used
The ground truth used was primarily based on:
- Predefined product requirements: The system's output was compared against its own specified functionalities and performance targets.
- Industry standards (IEC 61217, 62083, 62366): Compliance with these standards served as a form of ground truth for safety, design, and usability.
- Simulated clinical setups and reference dose calculations: For dose calculation accuracy, the "ground truth" would have been established by known physical principles, validated models, or highly accurate computational methods for the simulated scenarios, confirming whether the algorithm's output matched the expected physical reality.
- Predicate device comparison: The substantial equivalence argument relies on the Radiance V3 performing comparably or better than the predicate (Radiance V2), particularly with the new Hybrid Monte Carlo algorithm offering "a more accurate simulation."
8. Sample Size for the Training Set
The document does not provide information on the sample size for the training set. Treatment planning software like Radiance V3, while complex, typically relies on established physics models and algorithms rather than machine learning models that require explicit "training sets" in the sense of labeled data for supervised learning. The algorithms are built upon scientific principles and validated against known physical phenomena, not "trained" on a dataset of cases. Therefore, a training set in the typical AI/ML context is not applicable here.
9. How the Ground Truth for the Training Set Was Established
As noted above, the concept of a "training set" and associated "ground truth" in the machine learning sense is not applicable to this type of physics-based treatment planning software. The algorithms are based on physics equations and models validated through established scientific methods and comparisons to physical measurements (e.g., beam modeling and output factors for INTRABEAM measurements mentioned in the "Beam modeling tool" section).